Businesses face several challenges with manual SQL tuning, and application developers run into the same difficulties. The Oracle database addresses this with automatic tuning features.
These automatic tuning features are designed to work equally well for data warehouse and OLTP applications.
Automatic SQL Tuning Features
The Oracle database provides the following automatic SQL tuning features:
- ADDM - The Automatic Database Diagnostic Monitor (ADDM) analyzes information collected in the Automatic Workload Repository (AWR) for potential performance problems with the Oracle database, including high-load SQL statements.
- The SQL Tuning Advisor - The SQL Tuning Advisor optimizes SQL statements that have been identified as high-load statements. The Oracle database can automatically identify problematic SQL statements and implement tuning recommendations from the SQL Tuning Advisor during system maintenance windows. The advisor looks for ways to improve the execution plans of high-load SQL statements. Businesses can also run the SQL Tuning Advisor on any chosen workload of SQL statements to boost its performance.
- SQL Tuning Sets (STS) - When multiple SQL statements serve as input to ADDM, the SQL Tuning Advisor, or the SQL Access Advisor, the system builds and stores an STS. An STS includes the SQL statements along with their execution statistics and execution context.
- The SQL Access Advisor - Alongside the SQL Tuning Advisor, the SQL Access Advisor offers advice on indexes, materialized views, and materialized view logs for boosting performance on a specific workload. Generally, as the number of indexes and materialized views grows, and with it the space allocated to them, the performance of SQL queries improves. The SQL Access Advisor takes into account the trade-off between query performance and space use, and it suggests the most cost-effective configuration of new and existing indexes and materialized views.
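As a rough sketch of how the SQL Tuning Advisor can be invoked manually through the DBMS_SQLTUNE package (the task name, schema, and SQL text below are illustrative assumptions, not taken from this article):

```sql
-- Create and run a tuning task for a single high-load statement.
-- Task name, schema, and SQL text are assumed for illustration.
DECLARE
  l_task_name VARCHAR2(30);
BEGIN
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
    sql_text   => 'SELECT * FROM sh.sales WHERE amount_sold > 100',
    user_name  => 'SH',
    scope      => DBMS_SQLTUNE.SCOPE_COMPREHENSIVE,
    time_limit => 60,
    task_name  => 'high_load_sales_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task_name);
END;
/

-- Review the advisor's findings and recommendations.
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('high_load_sales_task') FROM DUAL;
```

The report may recommend new indexes, a SQL profile, fresh statistics, or a restructured statement, which the DBA can then accept or reject.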
How Can Businesses Boost the Efficiency of SQL Statements?
There are several ways a business can improve the efficiency of its SQL statements. Some of the most common are:
Verifying the Statistics of the Optimizer
The query optimizer uses statistics collected on tables and indexes when it determines the optimal execution plan for a statement. If statistics have not been collected, or if they no longer represent the data stored in the database, the optimizer does not have adequate information to generate the best plan.
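A minimal sketch of gathering fresh statistics with DBMS_STATS and comparing the dictionary's row count against the table's real cardinality (the SH schema and SALES table are assumptions for illustration):

```sql
-- Gather fresh optimizer statistics on a table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',
    tabname          => 'SALES',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- also gather statistics for the table's indexes
END;
/

-- Compare the dictionary's recorded row count with the actual row count
-- to judge whether the statistics have gone stale.
SELECT t.num_rows                        AS dictionary_rows,
       (SELECT COUNT(*) FROM sh.sales)   AS actual_rows,
       t.last_analyzed
FROM   dba_tables t
WHERE  t.owner = 'SH'
AND    t.table_name = 'SALES';
```

A large gap between the two row counts, or an old LAST_ANALYZED date, suggests the statistics should be regathered.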
Factors to Consider
Experts in SQL consulting state that the following factors need to be considered when optimizing SQL queries:
- If you collect statistics for any table in the database, it is advisable to collect statistics for all tables. This is especially important if the application includes SQL statements that perform joins.
- If the optimizer statistics in the data dictionary no longer represent the actual tables and indexes, collect new statistics. One way to evaluate whether the dictionary statistics are stale is to compare a table's real row count (cardinality) with the value of DBA_TABLES.NUM_ROWS. In addition, if there is significant data skew on predicate columns, use histograms.
- Evaluate the Execution Plan - When writing or tuning a SQL statement in an OLTP environment, the goal should be to drive from the table with the most selective filter. This means fewer rows are passed to the next step; if the next step is a join, fewer rows are joined. Check that the access paths are optimal.
When evaluating the optimizer's execution plan, check the following points:
- The plan is driven from the table with the best (most selective) filter.
- The join order at each step returns the fewest rows to the following step; whenever possible, the join order should proceed to the best filters that have not yet been used.
- The join method is appropriate for the number of rows returned. For instance, a nested loops join driven by an index may not be optimal when the SQL statement returns many rows.
- The database uses views efficiently. Check the SELECT list to determine whether access to the view is actually necessary.
- There are no unintentional Cartesian products, even with small tables.
- Each table is accessed efficiently.
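To inspect these points, the plan for a statement can be displayed with EXPLAIN PLAN and the DBMS_XPLAN package (the tables, columns, and predicate below are illustrative assumptions):

```sql
-- Generate the optimizer's plan for a statement without executing it.
EXPLAIN PLAN FOR
  SELECT c.cust_last_name, s.amount_sold
  FROM   sh.customers c
  JOIN   sh.sales s ON s.cust_id = c.cust_id
  WHERE  c.cust_city = 'Boston';  -- ideally the most selective filter drives the plan

-- Display the plan; check the access paths, join order, and join methods
-- in the output against the checklist above.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Reading the plan from its most deeply indented operation outward shows which table drives the plan and how many rows each step estimates it will pass upward.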
Businesses should check the predicates in the SQL statement and the number of rows in the table, and keep watch for anything that looks suspicious. For instance, a suspicious pattern would be a full table scan on a table with a huge number of rows when the statement has predicates in its WHERE clause. In that case, determine why an index was not used for the selective predicates.
A full table scan does not necessarily imply inefficiency. It can be sensible to perform a full table scan on a small table, or to use a full table scan to leverage a better join method, such as a hash join, for the number of rows returned.
SQL consultants state that if any of the above conditions are not optimal, the next feasible step is to restructure the SQL statement or the indexes available on the table. They sum up by saying that it is often easier to rewrite an inefficient SQL statement than to modify it. If the goal of a given statement is well understood, it is quick and simple to rewrite it in a form that best suits the needs of the business.