Where do we start..? Cost Threshold for Parallelism is simply the cut-off for when SQL Server decides whether to consider using parallelism to execute a query. Cost is the query optimiser's estimate of how expensive the query will be to run. The optimiser may still decide not to go parallel, but below the configured value it won't even consider parallelism.
Cost..? What is cost..? In reality it's just a relative measure against other queries. Maybe 20 years ago the cost related to an actual measurement, but now it's just a unitless value whose meaning really depends on your server.
Parallelism is just how SQL Server can split a query up over the available threads so parts of it can be executed in parallel. But there is overhead in doing this, so for smaller queries it doesn't make sense or offer any benefit.
Microsoft's default value is 5, which by all accounts is just too low. The consensus is to set it around 40, then monitor and tweak accordingly. This is the key point: you must monitor and tweak. I've been using the value 40 for a while now and normally it does the job. But on a recent system built on a server where resources were tight, 40 actually caused issues, making query execution in general slower across the server. We'd migrated databases from an old 2008 R2 instance to a new SQL 2016 server, and in general we saw a decrease in performance despite the exact same resources being allocated, as it was a like-for-like replacement. Reducing the threshold back to 5 gave a definite improvement in query times in general.
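For reference, the setting lives in sp_configure. A minimal sketch of checking and changing it (the value 40 here is just the starting point discussed above, not a universal recommendation):

```sql
-- Cost threshold is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Check the current value (see the run_value column)
EXEC sp_configure 'cost threshold for parallelism';

-- Set a starting value of 40, then monitor and tweak
EXEC sp_configure 'cost threshold for parallelism', 40;
RECONFIGURE;
```

The change takes effect immediately after RECONFIGURE; no restart is needed.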
This is an ongoing project and we will be tweaking CTOP to see which value offers the best performance, so I'll update when I have more info...
I've been working recently with a lot of memory optimised tables used for MIS data refreshes. Although not exactly geared toward that specific use, they can prove a great way to speed up the refresh, but you really need to understand how memory is affected...
Receiving out-of-memory errors and not having enough resources to run queries is something I've agonised over... The solution was implemented by a previous incumbent on a much larger system with tonnes of memory. Due to a service provider change, the system was ported to a large multi-tenant SQL box that offered much less resource to the instance. And as data volumes rose, so too did pressure on the underlying resources.
The memory allocated to memory optimised tables comes out of the memory allocated to the instance. For example, if your instance has 64GB, normally you would see your buffer pool take most of it. But if your memory optimised tables take 20GB, then suddenly the memory available to your buffer pool is curtailed by 20GB, leaving it a pool of 44GB to draw from. And if your instance's queries are massive during the refresh, you can end up with contention. In an ideal world, simply throwing more memory at it would resolve this. But what if you can't? You really need to work hard at defining your requirements for the buffer pool and for memory optimised tables, and work backwards to configure your system from there... Maybe memory optimised tables are not the solution for your MIS refresh.
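To see where that instance memory is actually going, a couple of DMV queries help. This is a sketch of the kind of check described above: the first query compares the In-Memory OLTP engine's memory against the buffer pool, the second breaks it down per memory optimised table (run the second in the database that holds them):

```sql
-- Memory held by the In-Memory OLTP (XTP) engine vs the buffer pool
SELECT type, SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
WHERE type IN ('MEMORYCLERK_XTP', 'MEMORYCLERK_SQLBUFFERPOOL')
GROUP BY type;

-- Per-table breakdown for memory optimised tables
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_used_by_table_kb / 1024 AS table_mb,
       memory_used_by_indexes_kb / 1024 AS index_mb
FROM sys.dm_db_xtp_table_memory_stats;
```

If the XTP clerk is eating a large slice of the instance memory, that's exactly the buffer pool squeeze described above.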
I'll add some more queries here in due course. Feel free to ask anything about the experience...
I have finally decided to take the plunge and add a blog to my site. I always wanted an easy way to share information with visitors and I’m super excited to start this journey. Keep coming back to my site and check for updates right here on the blog.
I'm currently working on two cloud database solutions using SQL Server 2014, one in AWS and the other in Azure. The AWS solution is a 2-node failover cluster using SAN-less replication for the shared storage, on SQL Server Standard. The Azure solution is a 2-node AlwaysOn Availability Group cluster running SQL Server Enterprise. More details on each in upcoming posts...