This week at our office we had a discussion about lock escalation. We were discussing a large query which returns hundreds of thousands of rows. The query touches some key tables into which critical inserts are happening. Even though the number of rows returned is large, the tables themselves hold millions of rows. We were worried about the locks the query is going to place. Will it take row locks, page locks, or even a table lock?
Unfortunately, according to Books Online, there are a couple of key points to be noted:
- The Database Engine does not escalate row or key-range locks to page locks, but escalates them directly to table locks. Similarly, page locks are always escalated to table locks.
- A single Transact-SQL statement must acquire at least 5,000 locks on a single table, index, or partition for escalation to be triggered. (Escalation is also triggered if the statement reaches the maximum memory allocated for locks.)
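As a side note, escalation behaviour can also be controlled per table. Here is a minimal sketch using the LOCK_ESCALATION option available from SQL Server 2008 onwards, shown against the MySales table built later in this post:
ALTER TABLE MySales SET (LOCK_ESCALATION = TABLE)   -- default: escalate straight to a table lock
ALTER TABLE MySales SET (LOCK_ESCALATION = AUTO)    -- allow partition-level escalation on partitioned tables
ALTER TABLE MySales SET (LOCK_ESCALATION = DISABLE) -- mostly prevent escalation; lock memory can grow quickly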
I got this information from the SQL Server 2008 R2 documentation, as there is no updated version for SQL Server 2012. I believe the facts remain the same for SQL Server 2012 too.
http://msdn.microsoft.com/en-us/library/ms184286(v=sql.105).aspx
This means we can expect a lock escalation whenever we query a table and read around 5,000 rows. (More precisely: as long as the number of rows involved is below 5,000, SQL Server places an intent shared lock on the table and intent shared locks on every page touched. When the total number of locks reaches 5,000, you can expect a lock escalation.)
We were worried. If our table has millions or billions of rows, and for every few thousand rows SQL Server is going to escalate the locks into a table lock, what will happen to all the inserts and updates? Do they have to wait for the select to complete? For us, the inserts and updates are far more important than the select statements.
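Before digging into the escalation point itself, a quick way to check whether the inserts really are waiting on a select is to look at the blocking columns of sys.dm_exec_requests. A minimal sketch, not specific to our tables:
-- List requests that are currently blocked, and what they are waiting on.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS blocked_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0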
So we did more testing to see how it actually happens. Even though the documentation says so, these figures are not magic numbers. They differ from table to table, and from environment to environment.
I tried this on two different tables: Sales.SalesOrderHeader in AdventureWorks and a new table derived from Sales.SalesOrderDetail. I created a new table for this purpose, as some of the large tables in the AdventureWorks database have triggers, computed columns, etc., which in turn end up touching other tables.
This is my script:
IF OBJECT_ID('MySales') IS NOT NULL
    DROP TABLE MySales
GO
-- Copy Sales.SalesOrderDetail into a plain table: SELECT INTO brings no
-- triggers, and computed columns (such as LineTotal) become regular columns.
SELECT * INTO MySales FROM Sales.SalesOrderDetail
GO
ALTER TABLE MySales ADD CONSTRAINT PKC_MySales PRIMARY KEY CLUSTERED (SalesOrderDetailID)

-- Double the row count five times by inserting the table into itself,
-- shifting SalesOrderID by the current maximum to keep the copies apart.
DECLARE @i int = 1, @Max int
SELECT @Max = MAX(SalesOrderID) FROM Sales.SalesOrderDetail
WHILE @i <= 5
BEGIN
    INSERT INTO MySales (SalesOrderID, CarrierTrackingNumber, OrderQty,
        ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount,
        LineTotal, rowguid, ModifiedDate)
    SELECT @Max + SalesOrderID, CarrierTrackingNumber, OrderQty,
        ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount,
        LineTotal, NEWID(), ModifiedDate
    FROM MySales

    SET @i += 1
    SET @Max *= 2
END
GO
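After five doublings, the table holds 32 times the original row count. Assuming the standard AdventureWorks load of 121,317 rows in Sales.SalesOrderDetail (your copy may differ), that works out to roughly 3.88 million rows:
-- Sanity check: 121,317 * 32 = 3,882,144 on a standard install
SELECT COUNT(*) AS row_count FROM MySales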
I ran tests on those tables using different queries.
My first test was on selects. As you may know, under the default READ COMMITTED isolation level, the shared locks taken by a select are released as soon as the read completes. So I went with the REPEATABLE READ isolation level instead. At that level, the locks remain until the transaction is committed or rolled back.
BEGIN TRAN
SELECT * FROM Sales.SalesOrderHeader WITH (REPEATABLEREAD) WHERE SalesOrderID <= 49726
GO
-- Count the locks held by the current session, grouped by resource and mode.
SELECT COUNT(*) AS lock_count,
       l.request_mode,
       l.resource_associated_entity_id,
       l.resource_type
FROM sys.dm_tran_locks l
INNER JOIN sys.sysprocesses p ON l.request_session_id = p.spid
WHERE l.request_session_id = @@SPID
GROUP BY l.resource_associated_entity_id,
         l.request_mode,
         l.resource_type
GO
ROLLBACK
Later, I replaced the select statement on line 2 with:
SELECT * FROM MySales WITH (REPEATABLEREAD) WHERE SalesOrderDetailID <= 6166
For the updates, I used the queries below:
UPDATE MySales SET ModifiedDate = GETDATE() WHERE SalesOrderDetailID <= 6166
UPDATE Sales.SalesOrderHeader SET ModifiedDate = GETDATE() WHERE SalesOrderID <= 49726
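To see the locks an update is holding before it commits, the statement can be wrapped in an explicit transaction and inspected with the same lock-count query used for the selects. A sketch of the pattern:
BEGIN TRAN
UPDATE MySales SET ModifiedDate = GETDATE() WHERE SalesOrderDetailID <= 6166
GO
-- ...run the sys.dm_tran_locks query from the select test here...
ROLLBACK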
I changed the number at the end of each query, increasing or decreasing it to vary the number of locks and find the escalation point.
In my tests, I found that the lock escalation point varied slightly depending on the situation. In all cases, lock escalation happened when I retrieved around 6,100 - 6,200 rows. There were additional page-level intent locks and a table-level intent lock as well.
In SQL Server 2008, there was an issue where inserts and updates stayed with row locks even for a very high number of affected rows. Adam Machanic found the issue for inserts and blogged about it: http://sqlblog.com/blogs/adam_machanic/archive/2009/10/30/sql-server-2008-lock-escalation-inserts-and-a-potential-bug.aspx Interestingly, the Connect item he created (it is mentioned in the blog post, and you can see the status here: https://connect.microsoft.com/SQLServer/feedback/details/506453/lock-escalation-no-longer-kicks-in-for-inserts-in-sql-server-2008) has been closed by Microsoft as fixed.
It sounds like SQL Server 2012 has changed the locking mechanism slightly, so that the escalation point is no longer exactly 5,000. So far, the change does not seem to be documented anywhere.
So what is the take-home lesson? Make sure you don't touch a large number of rows in tables that need frequent changes, critical updates, or time-critical responses. Try to work with smaller chunks of data and merge them in your application if needed. You can also consider using staging tables.
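As an illustration of the smaller-chunks approach, here is a minimal sketch against the MySales table from above. The batch size of 2,000 and the date predicate are example values only, chosen to stay well below the escalation threshold:
-- Update in batches so each statement takes far fewer than ~5,000 locks.
-- Each batch runs as its own autocommit transaction, so locks are
-- released before the next batch starts.
DECLARE @rows int = 1
WHILE @rows > 0
BEGIN
    UPDATE TOP (2000) MySales
    SET ModifiedDate = GETDATE()
    WHERE ModifiedDate < '20120101'  -- example predicate; updated rows no longer qualify
    SET @rows = @@ROWCOUNT
END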