SQL Server – Understanding Logging Under Full and Bulk-Logged Recovery Model

Logging in SQL Server is a vast subject and cannot be covered in a single blog post. This article attempts to explain the minimal logging behavior under the Bulk-Logged recovery model from a high-level perspective.

Let us first understand what “Minimal Logging” means. The official definition given in Books Online is:

Minimal logging involves logging only the information that is required to recover the transaction without supporting point-in-time recovery.

The TechNet article also lists several operations that can be minimally logged. The following are the two main points derived from the definition of minimal logging above.

  1. Minimal logging logs only the minimum information required to roll back a transaction.
  2. Minimal logging doesn’t give the ability to restore a database to a point in time. To put it differently, a T-log backup cannot be restored to a specific point in time if it contains any minimally logged operations (see the sketch after this list).
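
As a quick illustration of the second point, below is a minimal sketch of what that restriction looks like in practice. The database name, backup file names, and the STOPAT time are placeholders and not taken from this article; the point is only that a point-in-time restore of a log backup containing bulk-logged changes is refused by SQL Server.

--Restore the full backup without recovery (names and paths are placeholders)
RESTORE DATABASE [SomeDB] FROM DISK = N'F:\Backups\SomeDB_FULL.bak'
WITH NORECOVERY, REPLACE
GO

--Attempt a point-in-time restore of a log backup that was taken
--after a minimally logged operation such as SELECT..INTO
RESTORE LOG [SomeDB] FROM DISK = N'F:\Backups\SomeDB_Log.trn'
WITH STOPAT = N'2020-01-01T12:00:00', RECOVERY
GO
--SQL Server raises an error here stating that the log backup contains
--bulk-logged changes and cannot be used to stop at an arbitrary point in time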

Let’s now look at what limited information is logged in the T-log file as a result of a minimally logged operation when the database runs under 1) the Full recovery model and 2) the Bulk-Logged recovery model. From the TechNet article, one of the minimally logged operations under the Bulk-Logged recovery model is “SELECT…INTO”.

Full Recovery Model

Let’s see the logging behavior of this minimally logged operation when the database runs under the Full recovery model.

The very first step is to create a database and set its recovery model to Full. This is followed by taking a full backup so that the log chain is established and the t-log is no longer truncated as it would be under the simple recovery model. The next task is to create another database called [Sample] and populate it.

--Creating a Database
CREATE DATABASE [FULL_Reco]
GO
ALTER DATABASE [FULL_Reco] SET RECOVERY FULL
GO
 
--First FULL backup
BACKUP DATABASE [FULL_Reco] TO  DISK = N'F:\Backups\FULL_Reco_DeleteMe_FULL.bak'
WITH NOFORMAT, NOINIT,  NAME = N'FULL_Reco-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

--Create another Database to insert the records from
CREATE DATABASE [Sample]
GO
ALTER DATABASE [Sample] SET RECOVERY FULL
GO

--Create a table and populate records
USE Sample
GO
CREATE TABLE InstanceInfo (Servername varchar(30) default 'SampleServer')
GO
INSERT INTO InstanceInfo DEFAULT VALUES
GO 11272

Now it is time to run the minimally logged operation “SELECT..INTO” against the database [FULL_Reco]. Note that the table [InstanceInfo] in the [Sample] database has 11272 records after the above code is run.

USE FULL_Reco
GO
SELECT Servername
INTO [FULL_Reco].[dbo].[InstanceInfo]
FROM [Sample].[dbo].[InstanceInfo]

11272 rows were inserted into the [InstanceInfo] table in the [FULL_Reco] database. Now, let’s look at the number of log records this operation generated. An undocumented function called fn_dblog is used to read the T-log file.

Caution: This undocumented function shouldn’t be run in production unless you understand it well and it genuinely needs to be executed.
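
For reference, a summary of the log records can be produced with a query along these lines. This is only a sketch (the exact query behind the screenshot below isn’t shown here); fn_dblog(NULL, NULL) reads the active portion of the current database’s log, and Operation and Context are columns in its output.

--Summarize log records by operation and context (illustrative sketch)
USE FULL_Reco
GO
SELECT [Operation], [Context], COUNT(*) AS LogRecordCount
FROM fn_dblog(NULL, NULL)
GROUP BY [Operation], [Context]
ORDER BY LogRecordCount DESC
GO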

Below is a partial screenshot of the result after fn_dblog is executed.

It returned 12466 rows, with the operations “LOP_FORMAT_PAGE” (each entry corresponding to an entire page image) and “LOP_INSERT_ROWS” with context “LCX_HEAP” (roughly, “a record inserted into a heap table”) accounting for most of them. This is the expected behavior for a database running under the full recovery model, because everything is fully logged. Hence, SELECT..INTO is fully logged under the Full recovery model.

It should also be noted that full logging under the full recovery model differs from operation to operation: if the SELECT..INTO involves identity values, every row is logged individually, whereas without identity values the full pages are logged.
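
For example, the identity-value variant mentioned above could look like the following sketch; the RowId column and the target table name are purely illustrative and not part of the original example.

--SELECT..INTO that introduces an identity column via the IDENTITY() function;
--per the note above, this variant is logged row by row rather than as full pages
SELECT IDENTITY(int, 1, 1) AS RowId, Servername
INTO [FULL_Reco].[dbo].[InstanceInfo_WithIdentity]
FROM [Sample].[dbo].[InstanceInfo]
GO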

Bulk-Logged Recovery Model

Let’s proceed with the second half of this article and test the SELECT..INTO operation against a database running under the Bulk-Logged recovery model, repeating the same example as above but this time against a separate database running under Bulk-Logged recovery.

--Database creation
CREATE DATABASE [TestBulkLogged]
GO
ALTER DATABASE [TestBulkLogged] SET RECOVERY BULK_LOGGED
GO
 
--First FULL backup
BACKUP DATABASE [TestBulkLogged] TO  DISK = N'F:\Backups\TestBulkLogged_DeleteMe_FULL.bak'
WITH NOFORMAT, NOINIT,  NAME = N'TestBulkLogged-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

Let’s go ahead and run the minimally logged operation.

USE TestBulkLogged
GO
SELECT Servername
INTO [TestBulkLogged].[dbo].[InstanceInfo]
FROM [Sample].[dbo].[InstanceInfo]

The undocumented function [fn_dblog] is used again (with the same kind of summary query sketched earlier) to get the log records the minimally logged operation generated.

This time, it generated only 291 records, of which LOP_INSERT_ROWS accounted for only 15. Fewer log records mean less space used in the T-log file, which helps keep it from filling up as a result of a bulk transaction.
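
To relate the record counts back to actual space, the log size and space used for each database can be checked with DBCC SQLPERF. This wasn’t part of the original walkthrough, but it is a quick way to compare the two databases after running the same operation.

--Report transaction log size and percent of the log in use for every database,
--including FULL_Reco and TestBulkLogged
DBCC SQLPERF(LOGSPACE)
GO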

Full vs Bulk-Logged

Seeing the drastic difference in the log generated by the same operation under the full and bulk-logged recovery models, the next obvious question is: what did it actually log? Under the bulk-logged recovery model, only the page allocations are logged in the T-log file for these minimally logged transactions; there is no record of the actual data contained on those pages.

Bulk-logged differs from the full recovery model in that it doesn’t record every row inserted as part of a bulk operation; what it does log is information about the data page allocations. This is tracked by an allocation page called the Bulk Changed Map (BCM), which marks the extents affected by bulk operations by setting their bit value to 1. Other than this, nothing is logged in the T-log.

Hopefully, this will give some understanding of logging under the two recovery models.
