Aug 28, 2009

Log Shipping Issue

When we set up log shipping manually, it seemed to be an easy task. But within a couple of days we were proved wrong. The log shipped databases didn't get the updates on time, and we started receiving messages stating that the log backup had failed. It took some time to identify the cause: all the backups were set to start at the same time. Once we manually staggered the starting times, the jobs started working fine.

Soon we faced the next situation. Since we have more than 60 databases, scheduling them independently while giving each one enough room for its log shipping to complete became a problem: either the window is not long enough, or we step into another database's log shipping window. Doing such minute calculations was a little too costly for the operation, so we decided to live with this problem until the migration completes.
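Meanwhile, if you want to see how crowded the schedules really are, the log shipping backup jobs and their start times can be listed straight from msdb. This is only a sketch, assuming the default 'LSBackup_' job-name prefix that the wizard creates; adjust the filter to match your own naming.

-- List log shipping backup jobs and their scheduled start times (a sketch;
-- assumes the default 'LSBackup_' prefix on the backup jobs).
SELECT  j.name                   AS job_name,
        s.active_start_time      AS start_time_hhmmss,   -- stored as an integer, e.g. 13000 = 01:30:00
        s.freq_subday_interval   AS repeat_interval
FROM    msdb.dbo.sysjobs AS j
JOIN    msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN    msdb.dbo.sysschedules    AS s  ON s.schedule_id = js.schedule_id
WHERE   j.name LIKE 'LSBackup%'
ORDER BY s.active_start_time;

Jobs sharing the same active_start_time are the ones stepping on each other.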

Do we have a solution for it? I think we do. We are planning to try it next week, and I will post an update when we do. Hold your breath until then!

Aug 26, 2009

Database Migration

Today we completed the migration of one of our database servers (64 databases, around 200 GB in total, with around 200 replicated articles). The databases were moved to a new server and a new disk location, with up-to-the-minute data, logins, jobs and replication. Normally, transferring that much data would take a long time, so we adopted a mechanism that let us transfer all the databases while users kept working on the old server.


We used Log shipping to transfer the data.

There are a couple of things to learn from the experience. We had a couple of issues:
  • Unexpected server restart
  • Failure to restore the tail-log backup
  • Restore over an already restored database
  • Replication constantly failing, reporting "login failed for 'sa'"
Despite all these issues, we were able to complete the task with only 2 hours of downtime.
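For the curious, the final cutover was conceptually along these lines. This is a simplified sketch with made-up names, not the exact script we ran: take a tail-log backup on the old server with NORECOVERY so no further changes can slip in, then restore it on the new server with RECOVERY.

-- On the old (source) server: tail-log backup. NORECOVERY leaves the
-- database in the RESTORING state so no new transactions can be added.
-- 'Sales' and the share path are placeholders.
BACKUP LOG Sales
    TO DISK = N'\\FileShare\LogShip\Sales_tail.trn'
    WITH NORECOVERY;

-- On the new (target) server: restore the tail of the log and bring
-- the database online.
RESTORE LOG Sales
    FROM DISK = N'\\FileShare\LogShip\Sales_tail.trn'
    WITH RECOVERY;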

I am planning to write up what we did and the lessons learned from each of these issues over the next few days. Stay tuned!

Aug 25, 2009

The Tipping Point.

Many DBAs have faced execution plans where a query used an index seek for many days and then suddenly switched to an index scan (or vice versa). So when and how does SQL Server decide which one to use? (I am not talking about re-writing the query in a different way so that the execution plan changes; it is the same query that suddenly gets a new execution plan.)

SQL Server keeps statistics to determine the best method. When the estimated number of rows requested goes beyond a particular threshold relative to the size of the table, the decision to scan is taken.
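As a rough illustration (the table and index here are hypothetical, and the exact point where the plan flips depends on row size and table size), the same query shape can produce a seek for a narrow range and a scan for a wide one. Compare the actual execution plans of the two statements:

-- Assumes dbo.Orders has a clustered index on OrderID and a non-covering
-- nonclustered index on OrderDate, so each matching row needs a lookup.

-- Narrow range: few rows, so a seek on the OrderDate index plus lookups is cheap.
SELECT OrderID, CustomerID, TotalDue
FROM   dbo.Orders
WHERE  OrderDate >= '20090801' AND OrderDate < '20090802';

-- Wide range: once the estimated rows pass the tipping point, the cost of the
-- lookups outweighs reading the table, and the plan switches to a clustered index scan.
SELECT OrderID, CustomerID, TotalDue
FROM   dbo.Orders
WHERE  OrderDate >= '20090101' AND OrderDate < '20100101';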

Interesting? Kimberly L. Tripp, a great speaker, trainer and writer on SQL Server, gives some more information:

http://www.sqlskills.com/BLOGS/KIMBERLY/category/The-Tipping-Point.aspx

Aug 24, 2009

Default Size of Log File

What will happen if you create a database without specifying a log file? Yes, SQL Server will create the log file in its default location. But what about the size? Will SQL Server create it with the size of the log file in the model database?


CREATE DATABASE Sales ON
(
    NAME = Sales_data,
    FILENAME = 'D:\SQLData\SQL2k8\DATA\Sales.mdf',
    SIZE = 60
);

Interestingly, SQL Server will create the log file at 25% of the size of the data file. If you specify multiple data files, the log file size is 25% of the sum of all the data files.
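If you want to verify it on your own instance, the file sizes can be read straight from sys.master_files; this assumes the Sales database created above.

-- size is stored in 8-KB pages, so multiply by 8 and divide by 1024 to get MB.
SELECT  name, type_desc, size * 8 / 1024 AS size_mb
FROM    sys.master_files
WHERE   database_id = DB_ID('Sales');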

Are you getting a different size?

Aug 23, 2009

Average Function

Have you ever tried something like this:

SELECT AVG(NUM)
FROM
(
    SELECT 1 AS NUM UNION ALL
    SELECT 1 UNION ALL
    SELECT 1 UNION ALL
    SELECT 0
) AS A

Are you expecting an answer of 0.75? You will be surprised to see that the return value is 0.
The reason behind this is how the AVG function works.

If the input values are of an integer data type, the AVG function returns an integer as well, so the fractional part is truncated.
So if you have a column of an integer data type, you need to convert it to decimal before calculating the average value:

SELECT AVG(NUM*1.0)
FROM
(
    SELECT 1 AS NUM UNION ALL
    SELECT 1 UNION ALL
    SELECT 1 UNION ALL
    SELECT 0
) AS A


Are you happy now?

Have Fun with SQL and Prime Numbers

  • Want to have some fun with SQL Server?
  • Interested in math puzzles?
  • Like to win a $100 Amazon Voucher?
Try this out! Celko's Summer SQL Stumpers: Prime Numbers. You have a better chance of having fun than of winning the $100 voucher!

Aug 22, 2009

sys.dm_exec_query_optimizer_info

Have you ever looked at this dynamic management view: sys.dm_exec_query_optimizer_info?
Books Online gives only very basic information:
Returns detailed statistics about the operation of the SQL Server query optimizer. You can use this view when tuning a workload to identify query optimization problems or improvements. For example, you can use the total number of optimizations, the elapsed time value, and the final cost value to compare the query optimizations of the current workload and any changes observed during the tuning process. Some counters provide data that is relevant only for SQL Server internal diagnostic use. These counters are marked as "Internal only."
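One simple way to use it, as a sketch: snapshot the counters into a temporary table, run the workload you are tuning, and then compare the deltas. Remember the counters are server-wide, so other activity on the instance shows up too.

-- Snapshot the optimizer counters before the workload.
SELECT counter, occurrence, value
INTO   #optimizer_before
FROM   sys.dm_exec_query_optimizer_info;

-- ... run the workload you are tuning here ...

-- Compare against the snapshot to see what the workload added.
SELECT  curr.counter,
        curr.occurrence - prev.occurrence AS occurrence_delta,
        curr.value                        AS latest_avg_value
FROM    sys.dm_exec_query_optimizer_info AS curr
JOIN    #optimizer_before                AS prev ON prev.counter = curr.counter;

DROP TABLE #optimizer_before;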

I just got this nice post from Ben Nevarez! It explains not only the phases of query optimization, but also how the DMV works!

http://sqlblog.com/blogs/ben_nevarez/archive/2009/08/20/the-phases-of-query-optimization.aspx

Have a look!

Aug 21, 2009

(Mis) Understanding Scans and Seeks

Have you ever expected an index seek or clustered index seek for a query, only to find that SQL Server turned its back and decided to use a scan?
Do you believe this statement: "Scan reads through the object (table/index) from top to bottom looking for the records that it needs, while seek goes directly to the part of the object that it needs and reads to where the data that it needs ends. This is obviously a much more efficient operation than a scan, as the database server already knows where the data is that it is looking for"?

If you answered yes to either of these questions, you need to read my article:
http://www.sql-server-performance.com/articles/dba/understanding_scans_seeks_p1.aspx

As the above site does not allow reader feedback, I am opening this area for your comments.

Aug 20, 2009

Full Recovery Model

If you haven't changed the recovery model of the "model" database, the new databases you create will be in the full recovery model by default. However, that does not mean all the changes made to the database are logged and kept.
Until the first full backup is taken, the database behaves as if it were in the simple recovery model. So to take full advantage of the recovery model, take a full backup of the database immediately after creating it.
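A quick side check (not part of the test below, and only a sketch): before the first full backup the log keeps getting truncated automatically, so log_reuse_wait_desc in sys.databases normally stays at NOTHING; once the first full backup exists and changes accumulate, it starts reporting LOG_BACKUP because the log is waiting for a log backup.

-- Check the recovery model and what the log is currently waiting on.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'RecoveryModel';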
Want to check it out?
Here is the test:
In any recovery model, transactions are written to the log and forced to disk when the transaction completes. In the simple recovery model, log records are marked inactive (so the log space can be reused) once these conditions are met:
1. The transaction is either committed or rolled back.
2. The data page is written to disk.
These are the steps of the test:
1. Create a database and a table there.
2. Add data one by one and check the log file size.
3. Add the same data in one statement and check the size.
4. Drop the database
5. Repeat step 1
6. Take a backup
7. Repeat steps 2 & 3
Please note: to avoid log file growth caused by data pages still waiting in memory to be written to disk, we flush them to disk with a CHECKPOINT before the next insert.
Now try the code:


-- ********************* FIRST TEST ************************

SET NOCOUNT ON
-- Create database
USE master;
CREATE DATABASE RecoveryModel
GO
--Make sure it is in full recovery model
SELECT 'Before Backup' = DATABASEPROPERTYEX('RecoveryModel', 'Recovery')
GO
-- Create table
USE RecoveryModel;
GO
CREATE TABLE Test
(
TestID int IDENTITY NOT NULL PRIMARY KEY CLUSTERED,
SampleData nvarchar(max) DEFAULT(REPLICATE(N'a', 8000))
)
;
-- Check for size used; Fileid =2 refers to log file
SELECT fileid, [name], [size] FROM sys.sysfiles
CHECKPOINT; WAITFOR DELAY '00:00:05'
--Insert 1000 rows one by one
DECLARE @i int
SET @i = 1
WHILE @i <= 1000
BEGIN
    INSERT INTO Test DEFAULT VALUES;
    SET @i = @i + 1
END


-- Check for size used; Fileid =2 refers to log file
SELECT fileid, [name], [size] FROM sys.sysfiles
CHECKPOINT; WAITFOR DELAY '00:00:05'


--Insert 1000 rows at once
INSERT INTO Test SELECT SampleData FROM Test WHERE TestID<=1000;

--Check for size used; Fileid =2 refers to log file
SELECT fileid, [name], [size] FROM sys.sysfiles
GO

USE master;
DROP DATABASE RecoveryModel;
GO

--******************** SECOND TEST *****************************
SET NOCOUNT ON
-- Create database
USE master;
CREATE DATABASE RecoveryModel
GO
--Make sure it is in full recovery model
SELECT 'Before Backup' = DATABASEPROPERTYEX('RecoveryModel', 'Recovery')
GO
-- Create table
USE RecoveryModel;
GO
CREATE TABLE Test
(
TestID int IDENTITY NOT NULL PRIMARY KEY CLUSTERED,
SampleData nvarchar(max) DEFAULT(REPLICATE(N'a', 8000))
)
;

BACKUP DATABASE RecoveryModel TO DISK = 'C:\RecoveryModel.bak';
GO
-- Check for size used; Fileid =2 refers to log file
SELECT fileid, [name], [size] FROM sys.sysfiles
CHECKPOINT; WAITFOR DELAY '00:00:05'
--Insert 1000 rows one by one
DECLARE @i int
SET @i = 1
WHILE @i <= 1000
BEGIN
    INSERT INTO Test DEFAULT VALUES;
    SET @i = @i + 1
END


-- Check for size used; Fileid =2 refers to log file
SELECT fileid, [name], [size] FROM sys.sysfiles
CHECKPOINT; WAITFOR DELAY '00:00:05'

--Insert 1000 rows at once
INSERT INTO Test SELECT SampleData FROM Test WHERE TestID<=1000;

--Check for size used; Fileid =2 refers to log file
SELECT fileid, [name], [size] FROM sys.sysfiles
GO

USE master;
GO
DROP DATABASE RecoveryModel;