Tempdba All things SQL

By P MAGUIRE 12 Jan, 2018
Where do we start..? Cost Threshold for Parallelism (CTFP) is simply the cut line for when SQL Server decides whether to consider using parallelism to execute a query or not. Cost is the measure the query optimiser estimates the query will take to run. Above the threshold it may still decide not to go parallel, but below the value set it won't even consider parallelism.

Cost..? What is cost..? In reality it's just a relative measure against other queries. Maybe 20 years ago the cost related to an actual unit of measure (reputedly seconds of execution on a reference machine), but now it's just a unitless number, and what counts as expensive really depends on your server.
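To get a feel for what cost means on your own server, you can pull the optimiser's estimated subtree costs out of the plan cache. A minimal sketch (the TOP (20) and the ordering are just illustrative choices):

-- Survey the highest estimated costs currently in the plan cache
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS estimated_cost,
       cp.usecounts,
       cp.objtype,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.query_plan.exist('//StmtSimple/@StatementSubTreeCost') = 1
ORDER BY estimated_cost DESC;

If most of your workload sits comfortably below your threshold and only genuinely heavy queries sit above it, the value is probably in the right neighbourhood.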

Parallelism is just how SQL Server can split a query up over the available threads so parts of the query can be executed in parallel. But there is overhead in coordinating this, so for smaller queries it doesn't make sense or offer any benefit.

Microsoft's default value is 5, which by all accounts is just too low. So the consensus is to set it around 40, then monitor and tweak accordingly. This is the key point: you must monitor and tweak. I've been using the 40 value for a while now and normally it does the job. But on a recent system which was built on a server where resources were tight, 40 actually caused some issues, making query execution in general slower on the server. We'd migrated DBs from an old 2008 R2 instance to a new SQL 2016 server and in general we saw a decrease in performance despite the exact same resources being allocated, as it was a like-for-like replacement. Reducing it back to 5 brought a definite improvement in query times in general.
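For reference, the setting is changed with sp_configure; it's an advanced option, so that has to be switched on first. The 40 here is just the starting point discussed above, not a universal answer:

-- Enable advanced options so the setting is visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Set the threshold (a dynamic option, so no restart is needed)
EXEC sp_configure 'cost threshold for parallelism', 40;
RECONFIGURE;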

This is an ongoing project and we will be tweaking CTFP to see which value offers the best performance, so we'll update when we have more info...
By P MAGUIRE 11 Jan, 2018
With SQL 2016, AlwaysOn is your one-stop shop for everything HA and DR. But it comes with a cost - Enterprise licensing. With SQL 2016 Microsoft have released Basic Availability Groups. Think of this as the long-awaited replacement for Database Mirroring, which has been a deprecated feature since SQL 2012.

So what is it..? Think of it as a watered-down version of full Availability Groups which mirrors (excuse the pun) the functionality of Database Mirroring but enhances it a bit. And it's available with Standard Edition licensing. It works, within its limitations, and if that's all you need then perfect. It uses Windows Clustering for the availability and listener pieces, so it's trusted technology. Barring the limitations it's great... there's a quick creation sketch after the list below.

Limitations
Single database per availability group
Secondaries are not directly accessible
Cannot offload backups etc. to the secondary
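For illustration, a minimal sketch of creating one on the primary replica. The WITH (BASIC) clause is what distinguishes it from a full availability group; the group, database, server names and endpoint URLs below are all made up:

-- Hypothetical names throughout; adjust for your environment
CREATE AVAILABILITY GROUP [BAG_Sales]
WITH (BASIC)
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://sqlnode1.example.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://sqlnode2.example.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);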

But what if you have a customer that wants HA and DR in SQL 2016 but doesn't want to pay for Enterprise licensing? Well, you can use Agent jobs, alerts and scripts to mimic some of the behaviour of a full AlwaysOn Availability Group, but it comes with a warning... a rough sketch of the idea follows.
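The core of such a script is just old-fashioned log shipping: an Agent job on the primary backs up the log to a share, and a job on the secondary restores it, keeping the database in a restoring state. A heavily simplified sketch with hypothetical database and share names, and none of the scheduling, alerting or failover logic you would need in practice:

-- Job step on the primary: ship the log to a shared location
BACKUP LOG [SalesDB]
    TO DISK = N'\\fileshare\logs\SalesDB.trn' WITH INIT;

-- Job step on the secondary: apply it, leaving the DB restoring
RESTORE LOG [SalesDB]
    FROM DISK = N'\\fileshare\logs\SalesDB.trn' WITH NORECOVERY;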

We'll follow up with further material, or get in contact for more info...
By P MAGUIRE 11 Jan, 2018
I've been working recently with a lot of memory-optimised tables used for an MIS data refresh. Although the feature isn't exactly geared toward that specific use, it can prove a great way to gain speed in the refresh, but you really need to understand how memory is affected...

Out-of-memory errors and not having enough resources to run queries are something I've agonised over... The solution was implemented by a previous incumbent on a much larger system with tonnes of memory. Due to a service provider change, the system was ported to a large multi-tenant SQL box which offered much less resource to the instance. And as data volumes rose, so too did the pressure on the underlying resources.

The memory-optimised tables' memory allocation comes out of the memory allocated to the instance. Example - if your instance has 64GB, normally you would see your buffer pool take most of it. But if your memory-optimised tables take 20GB, then suddenly the memory available to your buffer pool is curtailed by 20GB, leaving it to take from a pool of 44GB. And if your queries are heavy during the refresh, that can lead to contention. In an ideal world, simply throwing more memory at the problem would resolve it. But what if you can't? You really need to work hard defining what your requirements are for the buffer pool and for the memory-optimised tables, and work back from there when configuring your system... Maybe memory-optimised tables are not the solution for your MIS refresh.
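As a starting point, here's a sketch of checking how much memory the memory-optimised tables are actually holding, and of capping In-Memory OLTP memory with a Resource Governor pool so the buffer pool keeps a predictable share. The pool name, database name and percentage are illustrative assumptions, and note that Resource Governor itself requires Enterprise Edition:

-- How much memory each memory-optimised table is holding
-- (run in the database containing the tables)
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb / 1024 AS allocated_mb,
       memory_used_by_table_kb / 1024 AS used_mb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_allocated_for_table_kb DESC;

-- Cap the memory the memory-optimised tables can take
-- (pool name, database name and percentage are hypothetical)
CREATE RESOURCE POOL Pool_MIS WITH (MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;
EXEC sp_xtp_bind_db_resource_pool
    @database_name = N'MISStaging', @pool_name = N'Pool_MIS';
-- The binding only takes effect after the database cycles offline/online
ALTER DATABASE [MISStaging] SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [MISStaging] SET ONLINE;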

I'll add more queries in due course. Feel free to ask anything regarding the experience...
By P MAGUIRE 15 Mar, 2016
All things known of SQL 2016
By P MAGUIRE 15 Mar, 2016
SQL Clustering in AWS using SAN-less replication
By P MAGUIRE 14 Mar, 2016
I have finally decided to take the plunge and add a blog to my site. I always wanted an easy way to share information with visitors and I’m super excited to start this journey. Keep coming back to my site and check for updates right here on the blog.

I'm currently working on two cloud database solutions using SQL Server 2014, one in AWS and the other in Azure. The AWS build is a two-node failover cluster using SAN-less replication for the shared storage on SQL Standard. The Azure solution is a two-node AlwaysOn Availability Group cluster running SQL Enterprise. More details on each in following blogs...