Execution plan changes in SQL Server: Causes and solutions

12/7/2024
Tomasz Chwasewicz

Execution plans, the complex maps plotted by the SQL Server optimizer, dictate the most efficient data retrieval path. These plans are not static, however. They change and adapt under the influence of many factors, from shifts in data volume to system updates, and each execution plan change in SQL Server has the potential to significantly affect performance.

Understanding the causes of these changes, and more importantly, understanding how to manage them effectively, is key to maintaining optimal performance in SQL Server environments.

Data changes and execution plan changes in SQL Server

One of the most common triggers for execution plan changes in SQL Server is a large change in the amount of data in the database tables. Why? SQL Server uses a specific mechanism: it maintains statistics on the data stored in the database. These statistics are then used by the query optimizer to determine the most efficient way to execute a query. When data changes (through insertions, deletions, or large-scale updates), these statistics drift away from the current state of the data.

SQL Server sets thresholds that determine how much data in a table must change before its statistics are automatically updated. Historically, this threshold has been roughly 20% of the rows in the table plus 500 modifications; since SQL Server 2016, large tables use a lower, dynamic threshold. When the threshold is exceeded, SQL Server marks the statistics as out of date. The next time a query that uses these statistics runs, the query optimizer can update the statistics first and then, based on this fresh data, generate a new execution plan.
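
To see how far statistics have drifted, you can check the modification counter that SQL Server tracks per statistics object. A minimal sketch, assuming an illustrative table dbo.Orders:

    -- Rows modified since each statistics object on dbo.Orders was last updated
    SELECT s.name AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.Orders');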

This behavior is designed to keep execution plans as efficient as possible. However, if data changes are sudden or continuous, it can lead to frequent (and annoying) changes in execution plans.

Outdated statistics and execution plan changes in SQL Server

Statistics in SQL Server provide a summary of the distribution of data in tables. This includes information such as the density of values, the minimum and maximum values, and a histogram of the distribution of values in each column. These summaries help the optimizer predict how many rows a query will affect, which in turn drives the chosen execution strategy.
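
To see exactly what the optimizer is working with, you can inspect a statistics object directly. A minimal sketch; the table and statistics names are illustrative:

    -- Show the header, density vector, and histogram for one statistics object
    DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');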

As data changes accumulate in the database, the accuracy of these statistics degrades. This drop in relevance can mislead the optimizer. In some cases, the optimizer may choose an index scan when an index seek would be more appropriate, or it may misjudge the number of rows returned by a filter, leading to suboptimal join methods or incorrect memory grants.

The consequences of this misinformation are painful:

  • Inefficient query performance: The optimizer can choose a plan that performs poorly because it is based on outdated statistics. Queries that once executed quickly can start to run slowly, consuming more resources such as CPU and I/O and leading to broader system performance issues.
  • Incorrect resource allocation: Incorrect estimates can lead to over- or under-utilization of SQL Server resources. Overestimates can cause the database to reserve more memory than necessary, leaving less available for other processes. Underestimation can lead to insufficient memory allocation, resulting in frequent disk I/O operations that could have been avoided.

Reducing the risk of statistics becoming obsolete

  • Manual update of statistics: Use UPDATE STATISTICS after significant data changes (as shown in the sketch after this list) to ensure that statistics accurately reflect the current state of the data, helping to maintain optimal query performance.
  • Adjust the threshold for updating statistics: Modify the default behavior so statistics are updated more often without manual intervention; on versions prior to SQL Server 2016, trace flag 2371 enables a lower, dynamic threshold for large tables.
  • Implement asynchronous statistics updates: Enable AUTO_UPDATE_STATISTICS_ASYNC to allow background updates, preventing query delays while keeping execution plans based on recent data.
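
A minimal sketch of all three mitigations; dbo.Orders is an illustrative table name:

    -- Refresh statistics on one table after a large data load
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

    -- On versions prior to SQL Server 2016, trace flag 2371 lowers the
    -- auto-update threshold dynamically for large tables
    DBCC TRACEON (2371, -1);

    -- Let automatic statistics updates run in the background instead of
    -- blocking the query that triggered them
    ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;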

Index health

Index health is one of the most important factors affecting SQL Server's ability to select the most efficient query execution plan. Well-maintained indexes provide fast paths to the data that the optimizer uses to construct high-performance plans. Issues such as index fragmentation, however, can severely skew these decisions, leading to less optimal query performance.

  • Fragmentation: Over time, as data in the database is modified (added, updated, deleted), indexes can become fragmented. The logical order of the index data becomes scattered and no longer matches the physical order on disk. The consequences are as follows (a sketch for measuring fragmentation follows this list):
    • Increased I/O time: Fragmented indexes force SQL Server to perform more I/O operations because the data pages are not contiguous, so the engine reads multiple pages from different locations on disk.
    • Ineffective execution plans: The optimizer bases its decisions on the physical layout and statistics of the indexes. If fragmentation leads to an incorrect estimate of the cost of using an index, the optimizer is more likely to choose a table scan instead of the fragmented index.
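
To gauge fragmentation before deciding on a fix, you can query the physical-stats function. A minimal sketch; dbo.Orders is an illustrative table name:

    -- Average fragmentation and page count per index on one table
    SELECT i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id;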

Solutions:

  • Index maintenance: Regular maintenance tasks, such as rebuilding or reorganizing indexes, can mitigate fragmentation. The choice between the two depends on the degree of fragmentation (see the sketch after this list):
    • Rebuilding indexes: This process creates a new version of the index, eliminating fragmentation, reclaiming disk space, and reordering the index rows according to their logical order. Rebuilding is generally more resource-intensive and is recommended when fragmentation is severe.
    • Reorganizing indexes: Less intensive than rebuilding, reorganization physically reorders pages at the leaf level to match their logical order. It is a lighter operation and can be performed more often as a preventive measure against fragmentation.
  • Regular monitoring: Track fragmentation levels over time so that maintenance can be scheduled before performance degrades.
  • Tools and Features: Use features such as SQL Server Agent to automate the index maintenance process. Tools such as SQL Server Management Studio (SSMS) provide built-in reports and dashboards to monitor the health and performance of indexes.
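
A minimal maintenance sketch, following the commonly cited thresholds of reorganizing at light fragmentation and rebuilding at heavy fragmentation; the index and table names are illustrative:

    -- Light fragmentation (commonly cited: roughly 5-30%): reorganize
    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

    -- Heavy fragmentation (commonly cited: above roughly 30%): rebuild
    -- (ONLINE = ON requires Enterprise edition)
    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (ONLINE = ON);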

Query and schema modifications

Changing the structure of a query or adjusting the database schema is like dropping a small pebble into a large pond: the ripples spread across the entire surface. Even small changes can force SQL Server to generate new execution plans.

  • Changing joins: Changing the way tables are joined can alter the data retrieval path more dramatically than rerouting a main highway.
  • Adjusting filters: Modifying WHERE clauses does not just filter the data differently; it can completely change which access paths are viable.
  • Introducing new operations: Adding elements such as GROUP BY or ORDER BY changes the query's requirements, requesting data in new ways that can load the system differently.

Schema modifications: Changing the database schema can also have far-reaching consequences:

  • Adjusting indexes: Adding or removing indexes can open up faster access paths or take them away, affecting the speed and efficiency of data retrieval.
  • Modifying columns: Changing data types, adding NULL constraints, or changing column sizes can affect how data is stored and accessed, requiring new execution plans to handle these changes effectively.
  • Adjusting constraints and relationships: Changes to primary keys, foreign keys, or check constraints can alter data integrity rules and the relationships between tables, affecting join behavior and the validity of existing execution plans.

Solutions:

  • Testing: Implementing changes in a controlled environment allows you to assess the impact on performance, providing insight into potential real-world issues.
  • Refreshing statistics: Updating the statistics after the modification is made ensures that the optimizer has the most up-to-date data.
  • Ongoing monitoring: Continuously monitoring the performance of queries and execution plans after changes are made helps to quickly identify and correct inefficient executions.

SQL Server updates and execution plan changes

SQL Server updates often bring modifications to the query optimizer, which is responsible for deciding on the most efficient way to execute queries. As the optimizer evolves, the way execution plans are determined can change with it.

One of the most significant changes an update can bring is a modification of the cardinality estimator. Its task is to predict the number of rows processed by each query operation, a prediction that strongly influences the choice of execution plan. Updates can refine these predictions, which can mean different plan choices that are no longer consistent with previous optimizations.
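
If a new cardinality estimator causes regressions after an upgrade, one option is to keep the legacy estimator while you investigate. A minimal sketch; the query is illustrative:

    -- Keep the legacy cardinality estimator for one database (SQL Server 2016+)
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

    -- Or scope the legacy estimator to a single problematic query
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '20240101'
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));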

Solutions:

  • Accurate Testing: Testing should include performance benchmarks to identify any potential regressions.
  • Plan management: SQL Server provides tools like Query Store to capture and preserve good execution plans. These can be used to force known-good plans even after an upgrade, until a more permanent solution is possible (see the sketch after this list).
  • Phased implementation: Deploying updates incrementally across environments allows teams to monitor their impact and adjust policies as needed.
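
A minimal sketch of forcing a known-good plan with Query Store; the query and plan IDs are illustrative and would come from the Query Store catalog views or reports:

    -- Find candidate queries and their captured plans
    SELECT q.query_id, p.plan_id, qt.query_sql_text
    FROM sys.query_store_query AS q
    JOIN sys.query_store_query_text AS qt
        ON qt.query_text_id = q.query_text_id
    JOIN sys.query_store_plan AS p
        ON p.query_id = q.query_id;

    -- Force the plan that performed well before the upgrade
    EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;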

Changes in server configuration

Server configuration settings have a direct bearing on how execution plans are created and executed.

One of the key settings that often affects the choice of execution plan is the maximum degree of parallelism (MAXDOP). It specifies the number of processors that SQL Server can use to execute a query in parallel. By modifying MAXDOP, you are essentially dictating how many CPU cores SQL Server can use for parallel query processing. A lower MAXDOP limits the server to fewer cores, pushing it toward more sequential query processing. A higher MAXDOP encourages the server to use more cores to execute complex queries faster, though this can lead to increased contention between processes if not managed carefully.
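
A minimal sketch of the two usual ways to set MAXDOP; the value 4 is an illustrative starting point, not a recommendation:

    -- Server-wide setting
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 4;
    RECONFIGURE;

    -- Per-database setting (SQL Server 2016+)
    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;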

However, these changes do not occur in a vacuum. They recalibrate the basic behavior of the SQL Server Query Optimizer. This recalibration can sometimes lead to a scenario where previously cached plans become suboptimal in new settings, prompting SQL Server to recompile queries in the new system configuration. This recompilation, while necessary, can lead to temporary performance drops as the server adjusts to the new settings.

Handling Execution Plan Changes in SQL Server

  • Gradual changes: Make incremental adjustments to settings such as MAXDOP, rather than large jumps, to better assess their impact.
  • Using baselines: Performance baselines captured before a change make it possible to measure the effectiveness of the configuration change.
  • Follow best practices: Microsoft's best practices for configuration settings, along with SQL Server documentation and community recommendations, provide guidance on this issue.

Parameter sniffing

Parameter sniffing refers to an SQL Server optimization behavior in which the query optimizer examines the parameter values passed during the first execution of a stored procedure and creates a plan based on those values. This process is generally positive: it tailors the execution plan to the query's actual workload. However, there are situations where parameter sniffing leads to performance problems, namely when the same execution plan is reused for vastly different parameter values.

When the initial parameter values are not representative of typical usage, the optimized plan may not be ideal for subsequent runs with different parameters. This can lead to inefficient query performance, longer execution times, and increased resource utilization.

Let's look at an example. If a stored procedure is first executed with a parameter that retrieves a small subset of rows, the query optimizer may select a plan that uses an index seek, which is very efficient for small data sets. If the same plan is then reused for a parameter that fetches a much larger set of rows, the plan may not scale well, leading to slower performance and higher resource demands.
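
A minimal sketch of the scenario, using OPTIMIZE FOR UNKNOWN as one common in-code mitigation; the procedure, table, and column names are illustrative:

    -- Without the hint, the first call's sniffed @CustomerID shapes the
    -- cached plan for every later call
    CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
        @CustomerID int
    AS
    BEGIN
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
        -- Optimize for average density instead of the sniffed value
        OPTION (OPTIMIZE FOR UNKNOWN);
    END;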

Dealing with Parameter Sniffing Issues

  • Query Store and plan forcing: The Query Store feature of SQL Server captures and stores detailed query performance data along with query plans. Administrators can manually force the use of specific plans that are known to work well for certain queries, avoiding the harmful effects of parameter sniffing.
  • Plan guides: Plan guides allow SQL Server users to attach query hints or fixed plans to specific queries, helping to control the choice of execution plan without directly modifying application code (see the sketch below).
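
A minimal plan-guide sketch; the guide name, statement text, and parameter list are illustrative, and the statement must be submitted in parameterized form (for example via sp_executesql) and match the guide's text exactly:

    -- Attach a hint to one parameterized statement without touching the app
    EXEC sp_create_plan_guide
        @name = N'PG_Orders_ByCustomer',
        @stmt = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID',
        @type = N'SQL',
        @module_or_batch = NULL,
        @params = N'@CustomerID int',
        @hints = N'OPTION (OPTIMIZE FOR UNKNOWN)';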
