Introduction
You can change server-wide default settings by using facets and properties at the server level, or by modifying the model system database (which acts as the template for newly created databases).
I don’t change much, but here are the settings I do change to make life easier for myself.
Note: I fully understand that everything needs to be considered on a case-by-case basis – I’m just presenting these as possible ideas.
#1 – Backup locations – (In SSMS object explorer, right-click the server >> Facets >> Backup Directory)
When configuring my backup procedures, I like to set up two folders on each database server, such as the ones below:
E:\SQL_Backups_Nightly\ | Used with the nightly backup maintenance job.
E:\SQL_Backups_Adhoc\ | Used for manual backups/restores.
I then set the latter as my default backup directory in the server facets pane.
This means that when I do manual backups/restores, the dialog box takes me straight to this folder. I find that when restoring a database under pressure, trying to navigate to the default path (which usually looks like the one below) just adds unnecessary confusion.
\\Servername\e$\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup
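If you would rather script the change than use the facets pane, the default backup directory lives in the instance's registry and can be set with the undocumented xp_instance_regwrite procedure. A minimal sketch, assuming the E:\SQL_Backups_Adhoc\ folder already exists:

-- Point the default backup directory at the ad hoc folder
-- (undocumented procedure, so test on a non-production instance first)
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    REG_SZ,
    N'E:\SQL_Backups_Adhoc';

SSMS should pick the new value up the next time you open the backup or restore dialog.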
#2 – File Locations – (In SSMS object explorer, right-click the server >> Properties)
This is similar to the backup locations above. In Database Settings under Database Default Location I use the following values.
Data: E:\SQL_Server_Data\
Log: F:\SQL_Server_Logs\
Again, these paths are just easier to find, and if you ever have to write MOVE statements and the like, it makes things a lot simpler.
For the record, it’s a good idea to have your log files on a separate physical disk from your data files; please see this article for a full explanation.
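As an illustration of why tidy paths help, here is the sort of MOVE clause you end up writing during a restore. The database name and logical file names (MyDatabase, MyDatabase_Data, MyDatabase_Log) are made-up examples, so swap in your own:

-- Restore a copy of a database, relocating its files to the standard folders
RESTORE DATABASE MyDatabase_Copy
FROM DISK = N'E:\SQL_Backups_Adhoc\MyDatabase.bak'
WITH MOVE N'MyDatabase_Data' TO N'E:\SQL_Server_Data\MyDatabase_Copy.mdf',
     MOVE N'MyDatabase_Log' TO N'F:\SQL_Server_Logs\MyDatabase_Copy_log.ldf',
     STATS = 10;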
#3 – Compression for Ad Hoc backups – (In SSMS object explorer, right-click the server >> Properties)
Once in properties, go to Database Settings and tick the box next to Compress Backup.
This means that every time you do a manual backup it will automatically be compressed – this saves a lot of disk space, and in practical terms there aren’t really any drawbacks. It is especially handy when you have to transfer the backup to another server, as the bottleneck is usually disk I/O.
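The same setting can be scripted with sp_configure if you prefer – a quick sketch:

-- Turn on backup compression by default for the whole instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

Individual backups can still override this with WITH COMPRESSION or WITH NO_COMPRESSION.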
#4 – Recovery Model – (Right Click the model system database >> choose properties >> General)
The model database is used as the template for whenever you create a new database on SQL Server.
I always set the recovery model to simple. Unless you are backing up transaction logs and know how to do point-in-time recovery, you are better off doing this.
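If you want to script it, something like this should do the job – the SELECT is just there to confirm the change:

-- Set the recovery model on model so new databases inherit SIMPLE
ALTER DATABASE model SET RECOVERY SIMPLE;

-- Confirm the change
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'model';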
#5 – Growth Settings – (Right Click the model system database >> choose properties >> General)
If left as they are, the default growth settings are likely to cause performance problems (from fragmentation and frequent growth events), and this is a simple one to fix.
Again, each database should be looked at on a case-by-case basis, but I change the following settings, which work for me.
modeldev | initial size: 128 MB, autogrowth: 128 MB, unrestricted
modellog | initial size: 128 MB, autogrowth: 10%, unrestricted
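Scripted, that looks roughly like this, using the model database's default logical file names (modeldev and modellog):

-- Resize model's data and log files and set sensible autogrowth
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 128MB, FILEGROWTH = 128MB, MAXSIZE = UNLIMITED);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 10%, MAXSIZE = UNLIMITED);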
#6 – Max memory settings (In SSMS object explorer, right-click the server >> Properties)
On the Memory page you can set the minimum and maximum server memory.
I usually leave at least 4 GB for the OS, and then some more if there are other applications/services running on the server.
I’ve seen blog posts mentioning that this setting is much more involved than it first appears, but I have successfully used it to improve performance for other services hosted on the same server, and it allows legitimate high-memory monitoring/alerting to be set up.
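To script it, max server memory is an advanced sp_configure option. The 28672 MB below is only an example figure for a server with 32 GB of RAM, leaving 4 GB for the OS:

-- Cap SQL Server's memory at 28 GB (example value – size it for your own server)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;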
#7 – Maximum Degree of Parallelism (In SSMS object explorer, right-click the server >> Properties)
On the advanced tab under Parallelism you can configure Max DOP.
“By default, SQL Server will use all available CPUs during query execution. While this is great for large queries, it can cause performance problems and limit concurrency. A better approach is to limit parallelism to the number of physical cores in a single CPU socket.”
Essentially, this setting helps you avoid the ‘thrashing’ you get when a large query hogs all the resources and simple, small queries end up taking longer than they should. Please see this article for full info.
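As with the other settings, this one can be scripted too. The value of 4 is only an example, following the ‘physical cores in a single socket’ guidance quoted above:

-- Limit parallelism (example value – match your physical cores per socket)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;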
Conclusion
There are loads more things you can change, but this blog post just covers what I think is good practice and what gives effective results for not much effort.
Note: I am only a makeshift DBA out of necessity – good DBAs are very hard to find, but they are worth their weight in gold.
All of these recommendations are based on my experience of working with SQL 2008 R2.