Ad hoc update to system catalogs is not supported.

Today someone handed me a partially set-up server so that I could finish configuring it.

One thing yet to be configured was Database Mail. We run a script on each new server to configure Database Mail so that it can send server alerts to our inboxes.

The first part of the script is simple enough. It reconfigures SQL Server to enable the database mail stored procedures.

The batch looks like this:

EXEC msdb.dbo.sp_configure
  @configname = 'Database Mail XPs',
  @configvalue = 1;
RECONFIGURE;


Today the RECONFIGURE statement failed with an error I’ve never seen before. The output of the batch looked like this:

Configuration option 'Database Mail XPs' changed from 0 to 1. Run the RECONFIGURE statement to install.
Msg 5808, Level 16, State 1, Line 2
Ad hoc update to system catalogs is not supported.

You normally see an error like this when you try to do something silly like directly modify the system tables:

DELETE FROM sys.objects;

The server will respond with a similar but distinct error:

Msg 259, Level 16, State 1, Line 1
Ad hoc updates to system catalogs are not allowed.

This error is a good thing. It protects us from making careless updates that trash the system.

But why should I see this error when I call sp_configure? By using the procedure, I’m asking the server to modify system tables on my behalf.

Nakul Vachhrajani at Beyond Relational has faced the same problem, and documented how to fix it.

There is a deprecated SQL Server setting called 'allow updates' that is set to 0 by default. When it is set to 1, it causes the RECONFIGURE statement to fail.

To fix the issue, I executed this batch on the server:

EXECUTE sys.sp_configure
  @configname = 'allow updates',
  @configvalue = 0;
RECONFIGURE;


It produced the usual output message, and no errors:

Configuration option 'allow updates' changed from 1 to 0. Run the RECONFIGURE statement to install.

Why didn't the RECONFIGURE statement raise an error this time? I'm unsure, but it looks like the change made by sp_configure takes effect immediately for this setting.
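
If it had still complained, another workaround I've seen documented for this error is to force the change through without the value check:

RECONFIGURE WITH OVERRIDE;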

Querying sys.configurations shows 'allow updates' is now set to 0:

SELECT *
FROM sys.configurations
WHERE name = N'allow updates';

configuration_id  name           value  minimum  maximum  value_in_use  description                     is_dynamic  is_advanced
102               allow updates  0      0        1        0             Allow updates to system tables  1           0

Now I can set up Database Mail!
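
The rest of our script creates a mail account and a profile and links them together. For reference, it does something along these lines (a sketch; the account, profile, and server names here are made up):

-- Create a Database Mail account, a profile, and link them.
EXECUTE msdb.dbo.sysmail_add_account_sp
  @account_name = 'AlertAccount',
  @email_address = 'alerts@example.com',
  @mailserver_name = 'smtp.example.com';

EXECUTE msdb.dbo.sysmail_add_profile_sp
  @profile_name = 'AlertProfile';

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
  @profile_name = 'AlertProfile',
  @account_name = 'AlertAccount',
  @sequence_number = 1;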

Ola Hallengren’s Maintenance Solution

A colleague asked me today to help him solve a maintenance issue that is affecting performance in the data warehouse.

Our server’s indexes are heavily fragmented and our statistics are months out of date in some places. Slow data retrieval makes it more difficult for the BI team to meet its operational KPIs because it takes longer to process data.

Well-organized indexes and fresh statistics are very important for quick data retrieval.

Without regular maintenance of these objects, SQL Server still works, but it eventually retrieves data very slowly as it rummages around the messy data. Think of trying to find a book in a library when all the books are scattered on the floor.

So what’s a busy DBA to do? Leverage the work of others: use Ola Hallengren’s Maintenance Solution!

Intelligent Index Maintenance

The SQL Server Maintenance Solution lets you intelligently rebuild or reorganize only the indexes that are fragmented. In the IndexOptimize procedure, you can define a preferred index maintenance operation for each fragmentation group. Take a look at this code:

EXECUTE dbo.IndexOptimize
  @Databases = 'USER_DATABASES',
  @FragmentationLow = NULL,
  @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
  @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
  @FragmentationLevel1 = 5,
  @FragmentationLevel2 = 30;

In this example, indexes that have a high fragmentation level will be rebuilt, online if possible. Indexes that have a medium fragmentation level will be reorganized. Indexes that have a low fragmentation level will remain untouched.
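
IndexOptimize reads fragmentation from the sys.dm_db_index_physical_stats DMV. You can run something like this yourself to see which of your indexes would fall into each group (a quick sketch; run it in the database you care about, using the cheap LIMITED scan mode):

-- List indexes above the 5% 'low' threshold, with their fragmentation.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;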

Update Statistics

The IndexOptimize procedure can also be used to update statistics. You can choose to update all statistics, statistics on indexes only, or statistics on columns only. If an index is rebuilt, its statistics are not updated separately, because the rebuild already refreshes them. You can also choose to update statistics only if any rows have been modified since the most recent statistics update.

EXECUTE dbo.IndexOptimize
  @Databases = 'USER_DATABASES',
  @FragmentationLow = NULL,
  @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
  @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
  @FragmentationLevel1 = 5,
  @FragmentationLevel2 = 30,
  @UpdateStatistics = 'ALL',
  @OnlyModifiedStatistics = 'Y';

Ola Hallengren is a SQL Server DBA working for a Scandinavian bank. He gave a talk about his maintenance solution at SQLBits a couple of years ago.

I'm keen to try it out. If it's good enough for all the well-known companies that use it, then it's good enough for me.

Plus, it’s free!

Restarting SQL Server to fix msdb

My laptop screwed up badly today. After several failed attempts, it finally restarted to a desktop that was stable and somewhat responsive, so I tried to continue as normal.

SSMS’s Object Explorer is now broken. Expanding any node produces this error message:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

Database 'msdb' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. (Microsoft SQL Server, Error: 945)

To fix this, I had to restart SQL Server.

Start a new PowerShell session as Administrator and run the following (-Force is needed because other services, such as SQL Server Agent, depend on the Database Engine):

Restart-Service MSSQLSERVER -Force;

Restart SSMS and the Object Explorer will work again.

Working with multiple versions of SMO in PowerShell


There are lots of ways to load the SMO assembly in PowerShell.

You can use the PowerShell V1 method:

[void] [Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo');

This loads the most recent version of the assembly and provides no way to specify a version. For these reasons, the method has long been marked obsolete in the .NET API.
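
You can check which version you actually got by inspecting the assemblies loaded into the current app domain:

# Show the version of the loaded SMO assembly, if any.
[AppDomain]::CurrentDomain.GetAssemblies() |
  Where-Object { $_.GetName().Name -eq 'Microsoft.SqlServer.Smo' } |
  ForEach-Object { $_.GetName().Version };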

The PowerShell V2 way is to use the Add-Type cmdlet. According to Max Trinidad, this works well in an environment with one version of SQL Server:

Add-Type -AssemblyName 'Microsoft.SqlServer.Smo';

I don’t work in such an environment, so the above command fails with an error.

You can specify the version to the Add-Type cmdlet, but you also have to specify the Culture and the PublicKeyToken of the underlying assembly.

I don’t know much about how .NET assemblies work. I do know that for this particular assembly, the Culture is always neutral, and the PublicKeyToken is always 89845dcd8080cc91.

Because the only attribute that varies is the version number, we can use this simple function to load an assembly of a specific version:

function Add-Smo ($Version) {
  Add-Type -AssemblyName "Microsoft.SqlServer.Smo, Version=$Version, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
}

Add-Smo -Version '';


New-Object 'Microsoft.SqlServer.Management.Smo.Server';

In my environment, the first few lines of output of the above script look like this:



AuditLevel                  : Failure
BackupDirectory             : C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup
BrowserServiceAccount       : NT AUTHORITY\LOCAL SERVICE
BrowserStartMode            : Disabled
BuildClrVersionString       : v2.0.50727
BuildNumber                 : 5500
Collation                   : Latin1_General_CI_AS
CollationID                 : 53256
ComparisonStyle             : 196609


In my environment, the valid values of the Version parameter are ‘’ and ‘’. For you, it may vary.

To use this in another script, you might just want to copy the function body and hard-code the version number you need.
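
For example, to hard-code a load of the SQL Server 2008 version of SMO (10.0.0.0 is an assumed version number here; substitute whatever is valid in your environment):

Add-Type -AssemblyName 'Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91';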

I don’t know how to create PowerShell function libraries yet, but I might like to add this to my own library of SQL Server functions.

Loading a flat file into a table

Follow these steps to load a Google Docs flat file export into a temporary table in an SQL Server database using the SQL Server Import and Export Wizard.

Flat files are a portable way to share data, but it’s difficult to analyze and play with the data until it’s in a database. This guide shows you how to do it using a SQL Server tool.

In this example, I’m going to load urban area population data scraped from Wikipedia.

Acquire the flat file

To prepare for this example, I already extracted data from the English Wikipedia’s article on the largest urban areas of the European Union using Google Docs. My guide on extracting data from Wikipedia using Google Docs shows you how to do the same.

On my own computer, the file is saved to “C:\Users\iain.CORP\Blog\Largest urban areas of the European Union – ImportHtmlOutput.tsv”. Remember your path, because you’ll need it soon.

For your convenience, I have uploaded a copy of the file. The XLSX extension is a workaround for my blog host's file-extension filter. Remove the XLSX extension before working with this file.

Start the import process

Start the SQL Server Import and Export Wizard by running the following command:

DTSWizard
The SQL Server Import and Export Wizard dialog should appear. Continue to configure the first panel.

Choose a data source

Follow these steps to configure the first panel of the dialog:

  1. Select ‘Flat File Source’ as your data source and wait for the interface to load.
  2. Paste without quotes the full path of your flat file into the file name field.
  3. Select ‘English (United Kingdom)’ as your locale. (Default for me; others may work.)
  4. Specify the file encoding. For UTF-8 files, such as those exported from Google Docs, select ‘65001 (UTF-8)’ as your code page. Leave the Unicode box unchecked. (In Windows-speak, Unicode means little-endian UTF-16.)
  5. Select ‘Delimited’ as your format. (Default)
  6. Set ‘<none>’ as your text qualifier. (Default)
  7. Set ‘{LF}’ as the header row delimiter. (The default Windows line ending is CRLF, but Google uses the Unix default.)
  8. Set header rows to skip as 0.
  9. Check ‘Column names in the first data row’.

In a serious ETL project, where imports would be regular and automated, you would want to specify precisely the data types in each field of your file to reject bad data files early. For this quick-and-dirty task, the defaults are fine: everything is a string up to 50 characters long.

Click Next until you get to choose a destination.

Choose a destination

Follow these steps to configure the dialog:

  1. Select ‘SQL Server Native Client 11.0’ as your destination, or your nearest match if you’re not using SQL Server 2012.
  2. Set ‘.’ as your server name if you are working locally; otherwise put the name of your server instance here.
  3. Use Windows Authentication. (Unless you have to use SQL Server Authentication for some reason.)
  4. Select tempdb as your database.

Using tempdb indicates to users that the data is not a permanent part of any schema. It also means the table will no longer exist after the server instance is restarted, so you don’t need to remember to drop it on a development machine.

It’s not a good idea to create tables like this in a user database, because it will confuse other users and could end up becoming part of your version-controlled schema, depending on how you use your tools.

Click Next to define the destination table.

Define a destination table

You should see the 'Select Source Tables and Views' panel.


By default, the destination table has the same name as the file being imported. With a good query-writing tool like SQL Prompt, this should not cause any difficulties, but it can be cumbersome to read. If so, you can rename it to something shorter: for example, [dbo].[Largest Urban Areas of the European Union] could become [dbo].[EU_Urban_Areas].

To rename the destination table, click the name in the destination column and type a new name.

Click ‘Edit Mappings…’ to open the Column Mappings dialog.

Change the type of every column to nvarchar to preserve all the character data. If you don't do this, the database will decode the bytes using its default code page, which will most likely mangle anything above code point 127 (the limit of US-ASCII).
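
Here's a quick illustration of the mangling, assuming a code page 1252 collation such as Latin1_General_CI_AS:

-- Ł and ź are not in code page 1252, so the varchar copy loses them.
DECLARE @v varchar(10) = N'Łódź';  -- stored as 'Lódz'
DECLARE @n nvarchar(10) = N'Łódź'; -- preserved exactly
SELECT @v AS varchar_value, @n AS nvarchar_value;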

The default settings for everything else should be fine.

Click OK.

Click Next on the main dialog to review your configuration.

Review configuration and error-handling

The review panel warns about possible data loss or errors for each column with a yellow triangle.


In this example, it’s safe to ignore the warnings.

The default behaviour on an error or a truncation is to fail the import. You can opt to ignore these on a per-row basis. I like to get all the data or none of it, so I don't change the default.

If your data is particularly poorly formatted, you might want to change the setting to 'Ignore' to import just the good rows, so that you have something to play with before devising a data-cleansing strategy.

Click Next to move on to the output options.

Import the data

You should see the 'Save and Run Package' panel.


The wizard is about to run an SSIS package to perform the data import. You could create the same package using Business Intelligence Development Studio, but the wizard is much easier for one-off tasks like this.

If you wanted to adapt the wizard's package into a component of a serious ETL project, you could save the package here.

But for this example, it’s okay just to run and forget.

Check 'Run immediately' and click Next to see a complete summary of the actions that the wizard will perform.


Click Finish, finally, to import the data!

If all goes well, you should see a succession of green ticks appear in the progress list.


You have successfully imported the data from the flat file.

Click Close to dismiss the dialog.

Inspecting the data

Start SQL Server Management Studio, connect to the destination database, and open a new query window using tempdb.

Run the following query to inspect the data:

SELECT *
FROM dbo.EU_Urban_Areas;

Success! You can now query the data using SQL.

Everything is stored as a string, even the numerical data. To perform analytical work on this data, you'll have to import it into a table with the correct data types. But that's for another post!
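
In the meantime, you can get a taste of it by casting in the query itself (a sketch; the column names here are guesses at what the source file contains):

-- Strip the thousands separators, then convert the text to a number.
SELECT [Urban Area] AS UrbanArea,
       CAST(REPLACE([Population], ',', '') AS int) AS Population
FROM dbo.EU_Urban_Areas;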

MCTS Learning Resources

A Microsoft-certified colleague today pointed me towards some learning resources that could be useful for the MCTS SQL Server 2008 Database Developer exam 70-433 that I’m going to sit next week.

Both Transcender and MeasureUp offer 30-day online access to mock exams. Allegedly, they are even harder than the Microsoft questions, so if you can pass these, then you really know your stuff.

They are expensive, though, and I’m not sure I can justify the cost. Prometric already charges £99 to sit the exam itself.

MeasureUp charges £52.00 for 150 questions covering the following SQL Server knowledge areas:

Implementing Tables and Views – 21 questions
Implementing Programming Objects – 24 questions
Working with Query Fundamentals – 31 questions
Applying Additional Query Techniques – 23 questions
Working with Additional SQL Server Components – 17 questions
Working with XML Data – 18 questions
Gathering Performance Information – 16 questions

Transcender charges $109.00 (about £68 at today's exchange rate) for a similar offering:

168 practice exam questions
298 free TranscenderFlash cards
3 year license
Pass Guarantee

The Pass Guarantee claims “Pass the Exam the First Time—Or Your Money Back!”.

Today Amazon is selling the official Microsoft self-paced training kit for £35.74. The official kit includes a CD with the same software used at the test centres, with sample questions set by Microsoft. I already have a copy of this, and think it’s all the resource I need to pass this exam specifically.

The questions on the exam demo CD show you the areas your knowledge has to cover. It's up to you to discover the answers in external resources.

Books Online is usually the first stop, because it’s Microsoft’s official documentation for the product. For a deeper understanding of fundamentals, Itzik Ben-Gan’s T-SQL Fundamentals is essential.

Effect of `SET NOCOUNT ON` on `@@ROWCOUNT`

From the @@ROWCOUNT documentation:

"Statements such as USE, SET <option>, DEALLOCATE CURSOR, CLOSE CURSOR, BEGIN TRANSACTION, or COMMIT TRANSACTION reset the ROWCOUNT value to 0."

Which means that if you get into the good habit of beginning all your stored procedure bodies with SET NOCOUNT ON;, you'll be in for a nasty surprise when you code a trigger with short-circuit logic.

For example, in this audit trigger (sketched here with an illustrative WibbleId column), the INSERT statement will never execute:

CREATE TRIGGER dbo.Wibble_InsertAuditRow
ON dbo.Wibble
AFTER INSERT
AS
BEGIN
  SET NOCOUNT ON;
  IF @@ROWCOUNT = 0 RETURN; -- short-circuit: NOCOUNT has already zeroed @@ROWCOUNT
  INSERT INTO Audit.WibbleHistory (WibbleId)
  SELECT WibbleId FROM inserted;
END;

(Let’s assume that Audit.WibbleHistory has extra columns with default values that audit when and by whom each change is made.)

Because SET NOCOUNT ON sets @@ROWCOUNT to zero, the do-nothing code path will always execute.

To avoid this, make sure to capture @@ROWCOUNT before setting NOCOUNT.
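
One safe pattern is to capture the value in the very first statement of the trigger body (a sketch; DECLARE with an initializer needs SQL Server 2008 or later):

AS
BEGIN
  -- The initializer reads @@ROWCOUNT before anything can reset it.
  DECLARE @RowCount int = @@ROWCOUNT;
  SET NOCOUNT ON;
  IF @RowCount = 0 RETURN;
  -- ... audit INSERT as before ...
END;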

sp_change_users_login and ALTER USER

Crusty old SQL Server DBAs know all sorts of stored procedures for fixing up databases. A very useful one is sp_change_users_login, which you can use to map a database user to a server login after restoring a database from a backup.

You will need to do this when rebuilding a database server from scratch, because the backup does not store information about logins or any of the other server-level assets.

If you have to map the server login appuser to the SuperApp database user appuser, and you want to stay crusty, you can use

EXECUTE SuperApp.dbo.sp_change_users_login
  @Action = 'Auto_Fix',
  @UserNamePattern = N'appuser';

But you should prefer the new T-SQL language constructs, because the stored procedure is going to be removed in a future version of SQL Server, and it is confusingly overloaded: it can change a mapping or report on mappings depending on the value you pass to the @Action parameter.
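
For example, the same procedure in reporting mode lists the orphaned users instead of fixing them:

EXECUTE SuperApp.dbo.sp_change_users_login @Action = 'Report';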

The shiny new way to do the same is the ALTER USER statement:

ALTER USER appuser WITH LOGIN = appuser;
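
If you just want to find orphaned users without the deprecated procedure, a catalog query does the job (a minimal sketch):

-- SQL-authentication users whose SID has no matching server login.
SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
  ON dp.sid = sp.sid
WHERE dp.type = 'S'         -- SQL users only
  AND sp.sid IS NULL
  AND dp.principal_id > 4;  -- skip dbo, guest, INFORMATION_SCHEMA, sys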


This was originally posted at the scotunes blog. I wrote this before I tried to install PostgreSQL.

In my day job I use SQL Server. But I’ve always wanted to learn another database system, because I believe knowing more than one will make me a better developer on both platforms.

When selecting a database platform for my latest data analysis project, I identified the following requirements: automatic source control, simple builds and deployment, easy unit testing, spatial support, and an affordable total cost.

The PostgreSQL community seems at least as active as the SQL Server community in providing these features. I guess you get more options in the open source world. For each tool class below, the example tool is just the first one I heard a reasonable recommendation for; there is at least one alternative in each case. All the PostgreSQL tools are open source.

For source control, builds and schema deployment, apgdiff. Compare SQL Compare in the SQL Server world, which is great, but too expensive for personal use.

For a unit testing framework implemented within the database, pgTap. Compare tSQLt for SQL Server. tSQLt is open source, but immature.

For spatial support, PostGIS. SQL Server understands spatial data out of the box. (It's hardly a core component of a relational database system, but I can only applaud, because spatial queries are very powerful and look cool.)

Because I've not implemented the database part of my project yet, I've got nothing to lose by choosing an unfamiliar technology except a couple of drunken weekends. And there is so much to gain by learning a new system.

That’s why I’ve chosen PostgreSQL.

Can't back up a database with compression in Standard Edition

When I tried to run the following script, generated by the ‘Back Up Database’ dialog:

BACKUP DATABASE [MigrationSubject]
TO DISK = N'\\NEWHOME\Temp\MigrationSubject_Full.bak'
WITH COMPRESSION,
  DESCRIPTION = N'Backup for migration from LEGACYBOX to NEWHOME.',
  NAME = N'MigrationSubject-Full Database Backup',
  STATS = 10;

I received the following error:

Msg 1844, Level 16, State 1, Line 1
BACKUP DATABASE WITH COMPRESSION is not supported on Standard Edition (64-bit).

The solution was to remove the COMPRESSION option from the WITH clause, which means the default server setting is used.
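
For reference, the working script is the same batch with that one option removed:

BACKUP DATABASE [MigrationSubject]
TO DISK = N'\\NEWHOME\Temp\MigrationSubject_Full.bak'
WITH
  DESCRIPTION = N'Backup for migration from LEGACYBOX to NEWHOME.',
  NAME = N'MigrationSubject-Full Database Backup',
  STATS = 10;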