Oct 4, 2012

Lock Escalation Point


This week at our office we had a discussion about lock escalation.  We were discussing a large query which returns hundreds of thousands of rows.  The query touches some key tables into which critical inserts are happening.  Even though the number of rows returned is large, it is still only a fraction of the table, which has millions of rows.

We were worried about the locks it was going to place.  Would it take row locks, page locks or even a table lock?
Unfortunately, according to Books Online, there are a couple of key points to note:
  • The Database Engine does not escalate row or key-range locks to page locks, but escalates them directly to table locks. Similarly, page locks are always escalated to table locks. 
  • Lock escalation is triggered when a single Transact-SQL statement acquires at least 5,000 locks on a single table, index or partition.  (Escalation is also triggered when the memory allocated for locks reaches its limit.)


I got this information from the SQL Server 2008 R2 documentation, as there is no updated topic for SQL Server 2012.  I believe the facts remain the same for SQL Server 2012 too.  http://msdn.microsoft.com/en-us/library/ms184286(v=sql.105).aspx

It means we can expect an escalation whenever we query a table and read around 5,000 rows.  (While the number of rows involved is still below 5,000, SQL Server places an intent shared lock on the table and an intent shared lock on every page touched.  Once the lock count reaches 5,000, you can expect a lock escalation.)

We were worried.  If our table has millions or billions of rows, and for every few thousand rows read SQL Server is going to escalate the locks to a table lock, what will happen to all the inserts and updates?  Do they have to wait for the select to complete?  For us, the inserts and updates are far more important than the select statements.

So we did more testing to see how it happens.  Even though the documentation says so, these figures are not magical numbers.  They differ from table to table, and environment to environment.
I tried this on two different tables: Sales.SalesOrderHeader in AdventureWorks and a new table derived from Sales.SalesOrderDetail.  I created the new table for this purpose, as some of the large tables in the AdventureWorks database have triggers, calculated columns etc. which in turn touch other tables.

This is my script:
IF OBJECT_ID('MySales') IS NOT NULL
      DROP TABLE MySales
GO
-- Copy Sales.SalesOrderDetail into a plain table with no triggers or computed columns
SELECT * INTO MySales FROM Sales.SalesOrderDetail
GO
ALTER TABLE MySales ADD CONSTRAINT PKC_MySales PRIMARY KEY CLUSTERED (SalesOrderDetailID)

-- Double the row count five times
DECLARE @i int = 1, @Max int
SELECT @Max = MAX(SalesOrderID) FROM Sales.SalesOrderDetail
WHILE @i <= 5
BEGIN
      INSERT INTO MySales (SalesOrderID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate)
      SELECT @Max + SalesOrderID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, NEWID(), ModifiedDate
      FROM MySales
      SET @i += 1
      SET @Max *= 2
END
GO

I ran the tests on those tables using different queries.

My first test was on selects.  As you may know, under the default READ COMMITTED isolation level, the locks taken by a select query are released as soon as each read completes.  So I went with the REPEATABLE READ isolation level, where the locks remain until the transaction is committed or rolled back.

BEGIN TRAN
SELECT * FROM Sales.SalesOrderHeader WITH (REPEATABLEREAD) WHERE SalesOrderID <= 49726

GO
-- Count the locks held by this session, grouped by resource and lock mode
SELECT COUNT(*) AS lock_count,
       l.request_mode,
       l.resource_associated_entity_id,
       l.resource_type
FROM   sys.dm_tran_locks l
       INNER JOIN sys.sysprocesses p ON l.request_session_id = p.spid
WHERE  l.request_session_id = @@SPID
GROUP BY l.resource_associated_entity_id,
         l.request_mode,
         l.resource_type
GO
ROLLBACK

Later I replaced the select statement in line 2 with  

SELECT * FROM MySales WITH (REPEATABLEREAD) WHERE SalesOrderDetailID <=6166

For the updates, I used the queries below:

UPDATE MySales SET ModifiedDate = GETDATE() WHERE SalesOrderDetailID<=6166

UPDATE Sales.SalesOrderHeader SET ModifiedDate = GETDATE() WHERE SalesOrderID<=49726

The number at the end of each query was changed between runs (I increased or decreased it to vary the number of locks taken and find the escalation point).

In my tests, I found that the lock escalation point varied slightly based on the situation.  In all cases lock escalation happened when I retrieved around 6,100 - 6,200 rows.  There were additional page-level intent locks and a table-level intent lock as well.

In SQL Server 2008, there was an issue where, even for a very high number of rows, inserts and updates stayed with row locks.  Adam Machanic found the issue regarding inserts and blogged about it: http://sqlblog.com/blogs/adam_machanic/archive/2009/10/30/sql-server-2008-lock-escalation-inserts-and-a-potential-bug.aspx  Interestingly, the Connect item he created (it is mentioned in the blog post, and you can see the status here: https://connect.microsoft.com/SQLServer/feedback/details/506453/lock-escalation-no-longer-kicks-in-for-inserts-in-sql-server-2008) was closed by Microsoft as fixed.

It sounds like SQL Server 2012 has changed the locking mechanism slightly, so that the escalation point is not exactly 5,000.  So far I have not seen the change mentioned anywhere.

So what is the take-home lesson?  Make sure you don't touch a large number of rows in tables which need frequent changes, critical updates, or time-critical responses.  Try to work with smaller chunks of data and merge them in your application if needed.  You can also consider using staging tables.
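If escalation itself becomes a problem, SQL Server 2008 and later also allow it to be controlled per table through the LOCK_ESCALATION option.  A minimal sketch against the MySales table from the tests above (weigh the extra memory that row and page locks consume before disabling escalation):

-- TABLE is the default; AUTO allows partition-level escalation on partitioned tables;
-- DISABLE prevents escalation to the table level in most cases
ALTER TABLE dbo.MySales SET (LOCK_ESCALATION = DISABLE)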

Sep 29, 2012

Supplement to Page Splits: Change in SQL Server 2012


In one sentence: page splits do not always make a 50/50 split in SQL Server 2012!

Recently I was preparing for a training session and checking on page splits.  Suddenly I thought it would be better to tell people about the number of page splits that can occur in a single statement and the amount of activity it can involve.  I found something new ... well, it's new to me, as my subsequent searches over the internet didn't give any answers.

Sep 16, 2012

My Investigation On a Heap Table: Why Does It Show Data Not In Order?

Recently, while I was looking at the DatabaseLog table in the AdventureWorks database, I found something weird.  Before going further, let us look at some important information about this table.  I publish only the information of concern here.

First of all, this table does not have a clustered index.  It has DatabaseLogID, an identity column which is also the primary key, nonclustered.  The next point is that it has variable length columns.  The weird thing came when I executed a SELECT * FROM DatabaseLog statement.


SET QUOTED_IDENTIFIER, ANSI_NULLS ON
GO
CREATE TABLE dbo.DatabaseLog(
     DatabaseLogID int IDENTITY(1,1) NOT NULL,
     PostTime datetime NOT NULL,
     DatabaseUser sysname NOT NULL,
     [Event] sysname NOT NULL,
     [Schema] sysname NULL,
     [Object] sysname NULL,
     [TSQL] nvarchar(max) NOT NULL,
     XmlEvent xml NOT NULL,
 CONSTRAINT PK_DatabaseLog_DatabaseLogID PRIMARY KEY NONCLUSTERED
(
     DatabaseLogID ASC
)
)
GO


To my surprise, the data didn't come in ascending order of DatabaseLogID.  In fact it had no order.  This piqued my curiosity and I started digging further.



When a table does not have a clustered index, the table is considered a heap.  It is not just a name.  Consider a heap of stones stored along the roadside for some repair work: there is no order, and any stone can be considered the first.  In the database world, a heap table is considered the same way.
But my question was different.  Okay, there is no order, but I am getting the results in some order.  Not the primary key order, but some other order.  What is that order?  The sketch below is one way to look at it.
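One way to peek is the undocumented %%physloc%% virtual column, available since SQL Server 2008, which exposes the physical file:page:slot each row comes from.  A minimal sketch (being undocumented, it should not be relied upon in production code):

-- Shows the physical location (file:page:slot) each row is read from
SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS PhysicalLocation,
       DatabaseLogID
FROM dbo.DatabaseLog;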

Aug 9, 2012

SS SLUG August 2012 meetup is at Pearson Lanka

As most of you will have noticed, or have even received an invitation, this month's user group meeting is happening at a different place.  Pearson Lanka (formerly known as eCollege) is hosting the meeting at their facility.  Yes, it is where I am working.  (Not only me; around eight other great DBAs work there too, including Dinesh Asanka (MVP) and Susantha Bathige.)

There are a couple of reasons behind the location change.

  • It gives the users some access to companies where SQL Server is used
  • Companies like Pearson consider supporting user group meetings as part of their social responsibility
  • People living/working closer to these areas have better chance of participating

Actually, we were planning to have the first meeting outside our usual place (the Microsoft building) last month at Navantis, but we couldn't work it out at such short notice.  We are still planning to have meetings at different places.
Since the location is different, Pearson decided to support us with transport facilities.  There will be vans picking people up from WTC just before the meeting and dropping them back.  For those who like to go through Borella, there will be a van to drop them there too.

I will be speaking on Using Extended Events.  It is one of the most enhanced features in SQL Server 2012 and is set to become the framework for server monitoring.

There are lightning talks, with around 10 speakers.

As some of you may have noticed, there is a T-SQL teaser section at SqlServerUniverse.com: http://www.sqlserveruniverse.com/T-SQLTeaser.aspx  Hurry up and send your answers on or before the 10th of August.
Pearson has decided to give a gift for one of the best answers.  I am not supposed to spell out what the gift is.
Come to the meeting to see what happens.

Hope to see you there  :)
  

Jul 4, 2012

SQL Server Universe is online again

As some of you know, SQLServerUniverse.com, the site managed by the SQL Server Sri Lanka User Group, was down for a couple of months.  Unfortunately, the crash was so serious that we had to rebuild it almost from scratch.  Gogula (http://dbantics.wordpress.com Twitter @gogula) did an amazing job with that, and the site is online now.

Right now the site does not have all the features; we are still working on it.  The most painful thing is that we have lost all the recent activity.  However, we have some of the material backed up, and I am confident that we can upload most of the articles again.  They were written by our Sri Lankan community leaders and it is not that difficult to get them from the authors once again.

As you know, this is a place for Sri Lankan professionals and enthusiasts to talk about SQL Server and related products and technologies (and about competing products and technologies too, at times), so it is high time to register on the site again.

Please visit the site:  http://www.sqlserveruniverse.com


Update @ 04-Jul-2012 11:00 am (+5:30):
Gogula told me that the user accounts are fully restored.  YES.  He is working on uploading the content again.

Apr 15, 2012

New Certification Paths



Interestingly, I got an email on the 11th from a friend of mine inquiring about the certifications Microsoft is offering.  I knew that Microsoft was planning to bring in a new set of certifications.  So while writing the reply, when I checked the site, I found that Microsoft had just updated the information.

Microsoft has done a major restructuring of the certification program related to SQL Server and related technologies.  It looks like Microsoft believes that the current program, developed a couple of years ago, should be changed to meet current market trends.  Microsoft is placing a lot of weight on the cloud side.  According to Microsoft, this trend will eventually be applied to other areas (such as development and Windows administration) as well.  The program is also divided into a more granular form.

From the remarks I see, many are worried about the name change: earlier we had MCSE for Microsoft Certified Systems Engineer, then it moved to MCDBA, later it changed to MCITP, and now we are back to MCSE (Microsoft Certified Solutions Expert).  (Yes, the world is round, but not fully round.)  Most HR firms will find it difficult to keep updating their staff and recruitment processes.  I still see vacancies referring to the MCDBA certification, which was the certification for SQL Server 2000.

For DBAs, the biggest change is the merging of the database development, database administration and business intelligence tracks.  To earn the first level (MCSA) you need to pass three exams: 70-461 (Querying Microsoft SQL Server 2012), 70-462 (Administering Microsoft SQL Server 2012 Databases) and 70-463 (Implementing a Data Warehouse with Microsoft SQL Server 2012).  It means every database professional who wishes to have a certification should know BI and administration as well.

Let me warn you about these exams: they are not easy.  70-462 covers much advanced material, including trace flags, DBCC commands, TDE, clustering, extended events and AlwaysOn.  Similarly, 70-463 includes Data Quality Services, advanced SSIS topics (fuzzy transformations, optimizing performance etc.), package configuration, deployment, using PowerShell, SCD type 2 and Master Data Services.
I remember someone saying that he would score more on the Business Intelligence exam if he were permitted to take dice into the exam hall.  :)  I am sure BI guys may have the same sentiments towards the administration exam too.

Ultimately, if the exams are harder, will many people take them?  There is a fear that it may encourage many people to use brain dumps rather than studying for the exam.  Also, many people will try to go to classes and memorize what is said, rather than studying on their own.  It could be true.  Additionally, the cost of gaining a certification has also increased (the price of each exam went up last year, and you need to pass more exams as well), which may make DBAs more reluctant.  (Some companies reimburse the exam fee if their employees pass, which may be an incentive.)

Apr 14, 2012

Page Splits Part 4: Cost and Ways of Controlling

This is the final part of the series.  I had not initially planned to write this series at one-week intervals, but due to various reasons it got dragged out.

Cost of Page Splits

Many writers have covered this many times, so let me summarize the points:

  • It needs to allocate a new page, move half of the content of the full page there, and delete the moved rows from the original page.
  • In order to do this and maintain consistency, the activity is logged.
    • It is this operation which, when captured by reading the log or by using extended events, provides information about page splits.
    • Microsoft has made a performance improvement to page splits by making them non-transactional.  It means that the page split operation itself cannot be rolled back.  Since the page is on the verge of splitting, we may have to perform a page split at a future time anyway, so Microsoft keeps the split pages for future need.  (To make sure backups and high availability features work well, the operation is still logged.)  Just enclose my code with BEGIN TRAN and ROLLBACK TRAN and check the space used by the table; a rough way to do this is sketched after this list.  I also checked for the LOP_UNDO_DELETE_SPLIT operation in extended events as well as in the transaction log, and it does not fire when a transaction is rolled back.
  • If the data file is already full (meaning all pages are allocated; it does not necessarily mean that all pages are fully utilized), an insert/update which needs a page split also needs a new page.  Since there is no free page available, it may lead to file growth.  (Unless instant file initialization is enabled and the SQL Server start-up account has the right to perform volume maintenance tasks, this will be a time consuming operation.)
  • Your insert and update statements could become costly.  This may lead to slowness in OLTP operations.
  • A simple update could take longer than expected.  I remember reading an article about the ripple effect of page splits.  I believe it is from Kimberly Tripp (sorry, I am unable to find the link).  Your update statement could cause a page split, and the page intended to store the new/updated row may itself be (almost) half full already.  If the row cannot be inserted into the remaining half, it may lead to another split.
  • Since the data is divided, each page will contain fewer rows and more empty space.  This is called internal fragmentation.  It leads to more pages being read when scanning or seeking on a range.  It also makes your database and backups bigger.  Some high availability options like transactional replication, database mirroring and log shipping will carry a higher load too.
  • Since the new page logically fits in the middle of the content but is physically located in the next available free page, the pages are not aligned together.  (Unless you have deleted at least one data page at the appropriate place, which may give you some space to place the new page; this is an extremely rare scenario.)  This is called external fragmentation.  It leads to unnecessary disk rotation, which means more time to read the data.
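As promised above, here is a rough sketch of that rollback test.  It assumes the dbo.Note table from part 3.5 of this series (in the page_split database), rebuilt so that it holds only rows 1, 2, 3, 5, 6 and 7:

-- Reserved/used space before the split
EXEC sp_spaceused 'dbo.Note'
BEGIN TRAN
      -- Inserting NoteID 4 into the already full page forces a page split
      INSERT INTO dbo.Note VALUES (4, REPLICATE('d', 1300))
ROLLBACK TRAN
-- The allocated space does not shrink back: the split itself was not undone
EXEC sp_spaceused 'dbo.Note'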

Controlling Page Splits

Page splits cannot be stopped completely, but there are two important aspects to controlling page splits and their effects.
  • First of all, periodically re-indexing the fragmented tables/indexes will remove fragmentation.  This should be part of the DBA's job.
  • Another preventive action is to control shrink database and shrink file operations.  While these operations can reduce the size of the database, they increase fragmentation.  (I have written about it before; visit this link: http://preethiviraj.blogspot.com/2011/11/will-performance-improve-if-database.html  In case you are in absolute need of shrinking the database, you may have to rebuild the clustered indexes afterwards.)

The other side of controlling page splits is part of the design of the database.
A couple of things could be done here:
  • Introduce an identity column, a numeric column fed by a sequence, or a sequential GUID as the clustered index key.  Since the key increases sequentially, it prevents inserts from causing page splits.  Generally, user defined clustered keys have a tendency not to increment naturally, unless they resemble a serial number.
  • Minimize the use of variable length columns.  Fixed length columns pad additional space which can be used during updates.  I have seen a similar approach in NoSQL (or, to say it correctly, non-relational) implementations where programs manually pad extra space for future use.
  • Have a proper fill factor.  Generally, keeping the fill factor of the index near 80% - 90% is a good idea, but the number should be verified in each environment.  A minimal sketch follows this list.
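For example, rebuilding an index with an explicit fill factor looks like this (a sketch against the dbo.Note table used earlier in the series; the right percentage depends on your own insert and update pattern):

-- Rebuild the clustered index, leaving roughly 10% free space on each leaf page
ALTER INDEX PK_Note ON dbo.Note
REBUILD WITH (FILLFACTOR = 90)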
Final thoughts
I need to give one warning about some of the methods/scripts used in this series.  Some of them are resource intensive and may slow down the instance.  For example, if you try to track operations in the transaction log to identify page splits, you may slow the application down; it should be used as a last resort.  Also be wary of relying on undocumented features in the long run: Microsoft may change them at any time, and using them could lead to losing Microsoft support on certain issues/problems.  Please check with Microsoft about what you are trying to use and its impact.




With this note, the series comes to an end.  I hope it has provided a good background for all my readers.


References and further reading:

I went through a lot of articles and did my own tests before producing this series.  I believe I have mentioned all the references at the appropriate places, but there could be certain citations I have missed.  That is because, at the time of reading, I do not think about writing about any point mentioned there; it is later that I get the idea of writing about something.  I have the memory of the content, but in my own words, and searching with my own words can lose the actual article.  I have not mentioned the various parts of Books Online here, but it is always one of my sources.

Another good article, written some time ago.  I was under the impression that it was written by Brad McGehee (from Red-Gate.com, when he was handling sql-server-performance.com; I may be wrong, as the current page simply says it was written by Admin).  The title says the purpose.

One of the greatest articles from a SQL Server guru, Paul Randal, which explains the cost of page splits in terms of log operations.  This is the article which provoked my curiosity and led to this series.  I will call this a must read if you are really interested in in-depth knowledge.

Talks about page splits and transactions.  A nice article to read.

Apr 12, 2012

Page Split 3.5: Advantage of Extended Events in SQL Server 2012

[Updated 13th April 2012 12:27 am] Note: Jonathan Kehayias (Blog) pointed out that some of his work is being referenced but not cited.  Since this is a series, I was under the impression that all references could be published at the end, but Jonathan pointed out to me that they should appear with each post.  So they are mentioned at the end of this post.  I regret any anxiety caused.  Thanks Jonathan.  I will be updating the other posts in this series.
Unlike previous versions, SQL Server 2012 provides more details on page splits.  As you all would have seen in the first part of the series, a page split can simply be an event where a new page is added to the table.
The page split we should be worried about is when an existing page splits to accommodate either a new row or a growing existing row.
With SQL Server 2012 extended events, we can track the database, the SQL statement which caused the page split, and even the type of page split.

Let's use this example.  Select the page split test extended event session, (if it is stopped, start it and) open Watch Live Data.  (There could be a delay in showing the events on the screen, but they are captured live.)
Now execute the following script:

IF DB_ID('page_split') IS NULL
       CREATE DATABASE page_split;
GO
USE page_split
GO
IF OBJECT_ID('dbo.Note') IS NOT NULL
       DROP TABLE dbo.Note
GO

CREATE TABLE dbo.Note
(
NoteID int NOT NULL CONSTRAINT PK_Note PRIMARY KEY CLUSTERED,
NoteText char(1300) NOT NULL
)
GO

-- At 1300+ bytes per row, only six rows fit on one 8 KB page
INSERT INTO dbo.Note VALUES (1, REPLICATE('a', 1300));
GO
INSERT INTO dbo.Note VALUES (2, REPLICATE('b', 1300));
GO
INSERT INTO dbo.Note VALUES (3, REPLICATE('c', 1300));
GO
-- The row with NoteID = 4 is deliberately skipped here
GO
INSERT INTO dbo.Note VALUES (5, REPLICATE('e', 1300));
GO
INSERT INTO dbo.Note VALUES (6, REPLICATE('f', 1300));
GO
INSERT INTO dbo.Note VALUES (7, REPLICATE('g', 1300));
GO
-- Inserting 4 now has to squeeze it between 3 and 5 on a full page, forcing a split
INSERT INTO dbo.Note VALUES (4, REPLICATE('d', 1300));
GO

When the code is executed, we can see three page-split-related events arriving in our watch window.
(Let's add the following columns from the Details section to the table view: database_name, SplitOperation, new_page_file_id, new_page_page_id, page_id, sql_text.)
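If you prefer T-SQL to the watch window, the raw event data can also be pulled from the session's target.  A minimal sketch, assuming the session is named page_split_test and writes to a ring_buffer target (adjust both to match your session):

-- Returns the captured events as XML for the page_split_test session
SELECT CAST(t.target_data AS xml) AS captured_events
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
WHERE s.name = 'page_split_test'
  AND t.target_name = 'ring_buffer';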

The watch window will look like this:
 

Apr 11, 2012

Page Splits Part 3 – Identifying through Extended events


My sincere apologies to all for the delay in producing part 3 of the series.  I got caught up in a set of urgent tasks which prevented me from updating my blog.

SQL Server has one more mechanism which allows us to identify page splits.  This feature, called extended events, was introduced with SQL Server 2008 and, according to Microsoft sources, will eventually replace SQL Trace.


SQL Server 2012 extended the feature not only by introducing new events but also by introducing a new UI for capturing extended events.
By using extended events we can identify the database where a page split has occurred.  As you may remember, by using performance monitor we couldn't identify the database where the page split occurred.


In addition, extended events provide more information, which we couldn’t gather before.


Let us have a walk-through of identifying page splits through extended events.
In SQL Server 2012, extended events (commonly referred to as XE) management is part of the Management tab.  Under Management there is a section for extended events, and it has a folder named Sessions.  There are two sessions already; system_health is the XE equivalent of the default trace.
Let us right click and select New Session, as shown in picture 1.  (The New Session wizard will take you to the same destination too, but to understand the basics, let us bypass the wizard.)  For reference, a sketch of the equivalent T-SQL appears below.
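What the UI builds can also be created directly with DDL.  A minimal sketch, assuming a session based on the page_split event with a ring_buffer target (the session name is illustrative):

-- Create and start a session that captures page splits along with the statement that caused them
CREATE EVENT SESSION page_split_test ON SERVER
ADD EVENT sqlserver.page_split
(
    ACTION (sqlserver.database_name, sqlserver.sql_text)
)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION page_split_test ON SERVER STATE = START;
GO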

Mar 17, 2012

Adding NOT NULL columns to an existing table

Adding a column with a NOT NULL constraint to an existing table requires a default value to be inserted.  If the table is large, the operation can take a lot of time and resources.  Importantly, this operation requires an exclusive table lock.  Before SQL Server 2005, this operation would prevent other users from accessing the table (unless the NOLOCK query hint was specified or the session was in READ UNCOMMITTED mode) until all the data pages were updated with the default value.

With SQL Server 2005, two (some call it one and a half!) new transaction isolation levels were introduced which allow other users to read a versioned snapshot of the rows, kept in tempdb.  Users of the READ_COMMITTED_SNAPSHOT or SNAPSHOT isolation levels can access the data as if the column did not exist.  However, the cost involved in the operation is still high, and it is also a cause of page splits.

People generally prefer not to include the NOT NULL clause in the ADD COLUMN statement.  Adding the column as nullable and then updating the table with the default value, chunk by chunk, is one of the mechanisms employed to prevent the issue to some extent.  This method reduces the locks and increases the availability of the table.  However, it will not prevent the page splits.  A minimal sketch of the pattern is below.
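Here is a minimal sketch of that chunked approach (the table name, column name, default value and batch size are all illustrative):

-- Step 1: add the column as NULLable; a quick, metadata-only change
ALTER TABLE dbo.BigTable ADD NewCol int NULL
GO
-- Step 2: backfill the default value in small batches so locks stay short-lived
WHILE 1 = 1
BEGIN
      UPDATE TOP (5000) dbo.BigTable
      SET NewCol = 0
      WHERE NewCol IS NULL
      IF @@ROWCOUNT = 0 BREAK
END
GO
-- Step 3: enforce NOT NULL (and a default for future inserts) once the backfill completes
ALTER TABLE dbo.BigTable ALTER COLUMN NewCol int NOT NULL
ALTER TABLE dbo.BigTable ADD CONSTRAINT DF_BigTable_NewCol DEFAULT (0) FOR NewCol
GO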

With the introduction of SQL Server 2012, we do not have to worry much about it.