Tuesday 17 April 2012

Initialization Parameters influencing performance

Initialization parameters that influence the performance of an application include the following:
  • OPTIMIZER_DYNAMIC_SAMPLING
  • OPTIMIZER_FEATURES_ENABLE
  • OPTIMIZER_INDEX_CACHING
  • OPTIMIZER_INDEX_COST_ADJ
  • OPTIMIZER_MODE
  • DB_CACHE_SIZE
  • DB_KEEP_CACHE_SIZE
  • DB_RECYCLE_CACHE_SIZE
  • SHARED_POOL_SIZE
  • CURSOR_SHARING
  • DB_CACHE_ADVICE
  • DB_FILE_MULTIBLOCK_READ_COUNT
  • DB_nK_CACHE_SIZE
  • HASH_AREA_SIZE
  • QUERY_REWRITE_ENABLED
  • QUERY_REWRITE_INTEGRITY
  • RESOURCE_LIMIT
  • RESOURCE_MANAGER_PLAN
  • SESSION_CACHED_CURSORS
  • SORT_AREA_SIZE
  • STATISTICS_LEVEL
  • WORKAREA_SIZE_POLICY
OPTIMIZER_DYNAMIC_SAMPLING


The OPTIMIZER_DYNAMIC_SAMPLING initialization parameter controls the number of blocks read by the dynamic sampling query. The parameter can be set to a value from 0 to 10.

Optimizer dynamic sampling refers to the ability of the SQL optimizer to take a sample of rows from a table to calculate missing statistics.

Some of the sampling levels that can be used:

level 0 - do not use dynamic sampling

level 1 - Sample all tables that have not been analyzed if the following criteria are met: (1) there is at least 1 unanalyzed table in the query; (2) this unanalyzed table is joined to another table or appears in a sub query or non-mergeable view; (3) this unanalyzed table has no indexes; (4) this unanalyzed table has more blocks than the number of blocks that would be used for dynamic sampling of this table. The number of blocks sampled is the default number of dynamic sampling blocks (32).

level 2 (default)- Apply dynamic sampling to all unanalyzed tables. The number of blocks sampled is two times the default number of dynamic sampling blocks.

level 4 - Apply dynamic sampling to all tables that meet Level 3 criteria, plus all tables that have single-table predicates that reference 2 or more columns. The number of blocks sampled is the default number of dynamic sampling blocks. For unanalyzed tables, the number of blocks sampled is two times the default number of dynamic sampling blocks.

levels 5, 6, 7, 8, and 9 - Apply dynamic sampling to all tables that meet the previous level criteria using 2, 4, 8, 32, or 128 times the default number of dynamic sampling blocks respectively.

level 10 - sample all blocks in the table

If you are using the default value of 2 for OPTIMIZER_DYNAMIC_SAMPLING, then dynamic sampling is used only for unanalyzed tables.
Setting it to 0 (off) causes the optimizer to use default statistics values for unanalyzed tables.

Dynamic sampling gathers statistics on unanalyzed tables at parse time, and so helps the optimizer arrive at an optimized execution plan based on the statistics available on the tables. Since 9i, the Oracle optimizer is a cost-based optimizer and therefore relies on table statistics to find the optimal execution plan.
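As a sketch, the parameter can be set at the session level, or overridden per statement with the DYNAMIC_SAMPLING hint (the table emp and level 4 are only illustrative):

```sql
-- Raise the dynamic sampling level for the current session
ALTER SESSION SET optimizer_dynamic_sampling = 4;

-- Or override the level for a single statement via a hint;
-- "emp" is a hypothetical table used only for illustration
SELECT /*+ DYNAMIC_SAMPLING(emp 4) */ *
FROM   emp
WHERE  deptno = 10;
```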


OPTIMIZER_FEATURES_ENABLE


OPTIMIZER_FEATURES_ENABLE acts as an umbrella parameter for enabling a series of optimizer features based on an Oracle release number.

You can browse the Oracle documentation for more information on the features available in each release of Oracle.

If you find the performance of an application deteriorating after an upgrade, you can set the OPTIMIZER_FEATURES_ENABLE parameter to a prior release of Oracle so that, until you find the root cause of the problem, the optimizer uses the features of that prior release. This is generally not recommended, however, because it amounts to upgrading the database while continuing to use only the features of an earlier release.
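For example (the version string shown is illustrative):

```sql
-- Temporarily fall back to 10.2.0.4 optimizer behavior while a
-- post-upgrade regression is investigated
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4' SCOPE = BOTH;
```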


OPTIMIZER_INDEX_CACHING


OPTIMIZER_INDEX_CACHING lets you adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators.

The cost of executing an index via an IN-list iterator, or of executing a nested loops join when an index is used to access the inner table, depends on how well that index is cached in the buffer cache. Setting this parameter to a higher value makes nested loops joins and IN-list iterators look less expensive to the optimizer, causing it to pick nested loops joins over hash or sort-merge joins, and to pick index access using IN-list iterators over other indexes or full table scans.

The settings of the OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ parameters will not make plans run faster; they only affect which plan is chosen, based on the cost of each plan. Changing these parameters will not cause the same plan to go faster (or slower). It will just change the costs associated with the plan, which might result in a different plan being chosen, and that is where you would see performance differences.

It is generally better to leave these parameters at their default settings.

OPTIMIZER_INDEX_COST_ADJ


OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path selection to be more or less index friendly—that is, to make the optimizer more or less prone to selecting an index access path over a full table scan.

A small value for this parameter favours the use of indexes over a full table scan, because the cost computed for index access is reduced relative to the cost of a full table scan.

Modifying this parameter could significantly degrade the performance of some queries if the perceived index cost is reduced to an unrealistic level and indexes are chosen when other access methods are more appropriate.

The optimizer_index_cost_adj parameter defaults to a value of 100, but it can range in value from one to 10,000.
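A sketch of adjusting both index-related parameters at the session level (the values shown are examples, not recommendations):

```sql
-- Assume 90% of index blocks are cached, and scale index access
-- costs down to 50% of the default estimate
ALTER SESSION SET optimizer_index_caching  = 90;
ALTER SESSION SET optimizer_index_cost_adj = 50;
```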



OPTIMIZER_MODE


OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for the instance.

•first_rows_n
The optimizer uses a cost-based approach and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100, 1000).

•first_rows
The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows.

•all_rows
The optimizer uses a cost-based approach for all SQL statements in the session and optimizes with a goal of best throughput (minimum resource use to complete the entire statement).

When OPTIMIZER_MODE is set to CHOOSE, it is up to the optimizer to choose either a cost-based or a rule-based optimization approach, and this choice is based on whether statistics are present on the database objects being queried.

The Rule Based Optimizer (RBO) is now obsolete in Oracle 10g. The functionality is still present but no new functionality has been included in it and it is no longer supported by Oracle. It is only present to provide backwards compatibility during the migration to the query optimizer (Cost Based Optimizer). The results of this obsolescence are as follows.

•The CHOOSE and RULE options for the OPTIMIZER_MODE parameter still exist but are no longer supported.
•The default value for the OPTIMIZER_MODE parameter is ALL_ROWS.
•The CHOOSE and RULE optimizer hints still exist but are no longer supported.
•Code requiring the RBO must be migrated to use the query optimizer.
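For example, the optimization goal can be changed for a session (shown here with a first-rows goal):

```sql
-- Optimize for fast delivery of the first 10 rows in this session
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;
```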

DB_CACHE_SIZE


DB_CACHE_SIZE specifies the size of the DEFAULT buffer pool for buffers with the primary block size (the block size defined by the DB_BLOCK_SIZE initialization parameter).

Starting in Oracle9i we also have separate database cache size parameters for non-standard block sizes: DB_nK_CACHE_SIZE, where n is 2, 4, 8, 16, or 32.

The database writer (DBWR) process must sweep all of the blocks in db_cache_size when performing an asynchronous write. Having a huge db_cache_size can cause excessive work for the database writer.

Whenever there is a physical read of data from the data files, the data is first cached in the database buffer cache, so that on subsequent access the same data can be fetched from the cache instead of being read from the data files again. This results in good performance because the data is read from memory instead of from the physical data files. The buffer cache works on an LRU algorithm: the least recently used blocks are flushed from the cache, which may result in a physical read when that data is requested again. It is therefore important that this cache is appropriately sized, based on the recommendations from the AWR report.

We can also use the V$DB_CACHE_ADVICE view (populated when the DB_CACHE_ADVICE parameter is enabled) to determine an appropriate value for the DB_CACHE_SIZE parameter and so prevent the necessary blocks from aging out of the cache.
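A sketch of querying the buffer cache advisory for the DEFAULT pool:

```sql
-- Estimated physical reads at candidate cache sizes
-- (DEFAULT pool, primary block size)
SELECT size_for_estimate,
       buffers_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    advice_status = 'ON';
```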

DB_KEEP_CACHE_SIZE

db_cache_size + db_keep_cache_size + db_recycle_cache_size = total buffer cache size.

This parameter sizes the KEEP buffer pool, which is used for caching objects so that they are not aged out of memory. Small, frequently used lookup tables can be kept in this pool so that the cost of repeated physical reads is avoided, the data being accessed from the cache instead.
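An object is assigned to the KEEP pool through its storage clause; the table name below is hypothetical:

```sql
-- Cache a small, frequently accessed lookup table in the KEEP buffer pool
ALTER TABLE country_codes STORAGE (BUFFER_POOL KEEP);
```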


DB_RECYCLE_CACHE_SIZE


DB_RECYCLE_CACHE_SIZE specifies the size of the RECYCLE buffer pool. This pool is reserved for table blocks of large tables on which full table scans are performed.

SHARED_POOL_SIZE


The shared pool can further be subdivided into:

*    Control structures
*    Character sets
*    Dictionary cache: the dictionary cache stores parts of the data dictionary, because Oracle has to query the data dictionary very often as it is fundamental to the functioning of Oracle.

Oracle needs to allocate & deallocate memory as SQL or procedural code is executed based on the individual needs of users' sessions and in accordance to the LRU algorithm.

*    Library cache
    The library cache is further divided into
        Shared SQL Area,
        PL/SQL Procedures and
        Control Structures (Latches and Locks).
    These are highly accessed memory structures that provide information on object structures to SQL statements being parsed.

CURSOR_SHARING


CURSOR_SHARING determines which SQL statements can share the same cursor. It can be set to EXACT (the default), FORCE, or SIMILAR.

EXACT: The database doesn't replace any literals, and the optimizer sees the query exactly as it was submitted. The optimizer generates a separate plan for each distinct statement, based on the literal values in that statement. Each plan is thus optimal, but each statement has its own parent cursor, so a statement that's executed numerous times with different literals can use a considerable amount of space in the shared pool. This can potentially lead to latch contention and a slowdown in performance.

FORCE: Regardless of whether there's a histogram, the optimizer will replace the literal values with a bind value, so that a query such as

select * from employees where job='Clerk'

is optimized as if it were in the following form:

select * from employees where job=:b

The optimizer uses a single plan for each SQL statement, regardless of the literal values. Thus, the execution plan won't be optimal, as the plan is generic, and not based on the literal values. If a query uses literal values, the optimizer will use those values to find the most efficient execution plan. If there are no literals in the SQL statement, it's very hard for the optimizer to figure out the best execution plan. By "peeking" at the value of the bind variables, the optimizer can get a better idea of the selectivity of the where clause condition—it is almost as if literals had been used in the SQL statement. The optimizer peeks at the bind values during the hard parse state. Since the execution plan is based on the specific value of the bind variable that the optimizer happened to peek at, the execution plan may not be optimal for all possible values of the bind variable.

In this example, the optimizer uses bind peeking based on the specific value of the JOB column it sees. In this case, the optimizer uses the value Clerk to estimate the cardinality for the query. When it executes the same statement (with a different value in the JOB column, say, Manager), the optimizer will use the same plan that it generated the first time (JOB=Clerk). Since there is only one parent cursor and just child cursors for the distinct statements, there's less pressure on the shared pool. Note that a child cursor uses far less space in the shared pool than a parent cursor. Often, setting the cursor_sharing parameter to FORCE immediately resolves serious latch contention in the database, making this one of the few magic bullets that can help you quickly reduce latch contention.

SIMILAR (without a histogram on the JOB column): The database will use literal replacement—it uses a system-generated bind value instead of the literal value for the JOB column (Clerk). This is because the absence of a histogram on the JOB column tells the optimizer that the data in the JOB column isn't skewed, and therefore the optimizer chooses the same plan for each execution of the statement, even though the literal values are different. The optimizer thinks it shouldn't make any changes to the execution plans for the statements that differ only in literal values because the data is uniformly distributed. The SIMILAR setting without a histogram on the columns in a query provides the same query performance and a similar impact on the shared pool as when you specify the FORCE setting.

SIMILAR (with a histogram on the JOB column): When the optimizer sees the histogram in the JOB column, it realizes that the column is skewed—this tells the optimizer that the results of the query may vary widely depending on the literal value of the JOB column. Consequently, the optimizer generates a different plan for each statement based on the literal values—thus the plans are very efficient, as in the case when you specify the EXACT setting. The SIMILAR option with a histogram in place does use more space in the shared pool, but not as much as when you use the EXACT setting. The reason for this is that each statement has its own child cursor instead of a parent cursor.

The choice among the various settings of the cursor_sharing parameter really boils down to an assessment of what's more critical to database performance: using the default EXACT setting or SIMILAR (with a histogram on the relevant column) does provide better query performance, but it leads to the generation of numerous parent cursors (EXACT setting) or child cursors (SIMILAR setting). If there's severe pressure in the shared pool, and consequent latch contention, the entire database will perform poorly. Under these circumstances, you're better off implementing a system-wide solution by setting the cursor_sharing parameter to FORCE, as this guarantees that there's only a single child cursor for each SQL statement. If you're concerned about the impact of a single SQL statement, just drop the histogram on the relevant columns used in that statement and set the cursor_sharing parameter to FORCE; this ensures that the optimizer uses system-generated bind values for the column(s) and that the SQL statement uses much less space in the shared pool.
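The effect can be observed in V$SQL; a sketch, reusing the employees example above:

```sql
-- With cursor_sharing = FORCE, statements differing only in literals
-- collapse into one shared cursor with a system-generated bind
ALTER SESSION SET cursor_sharing = FORCE;

SELECT sql_text, executions
FROM   v$sql
WHERE  sql_text LIKE 'select * from employees%';
```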

Adaptive Cursor Sharing


In Oracle Database 11g, the database feature called adaptive cursor sharing enables a SQL statement with bind variables to use multiple execution plans, with each execution plan based on the values of the bind variable(s). Adaptive cursor sharing is enabled by default, and you can't disable it.

In earlier releases, DBAs often flushed the shared pool (and worse, sometimes restarted the database) when confronted with situations where the database apparently started using an inappropriate execution plan for a SQL statement, due to the bind peeking effect. In the 11g release, you don't have to do anything: the optimizer automatically changes execution plans when it encounters skewed data.

With adaptive cursor sharing, the database uses multiple execution plans for a statement that uses bind variables, ensuring that the best execution plan is used for each value of the bind variable. When different bind variable values indicate different amounts of data to be handled by the query, Oracle adapts its behavior by using different execution plans instead of sticking to the same plan for all bind values.

Since adaptive cursor sharing works only where literal values are replaced with binds, Oracle encourages you to use the FORCE setting for the cursor_sharing parameter. If you set the parameter to SIMILAR and you have a histogram on a column, the optimizer doesn't perform literal replacement with bind variables, and thus adaptive cursor sharing won't take place. You must set the cursor_sharing parameter to FORCE for adaptive cursor sharing to work, thus letting the optimizer select the optimal execution plan for different values of the bind variable.

DB_FILE_MULTIBLOCK_READ_COUNT


DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans thereby reducing the time taken for the execution of a Query. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation.

Online transaction processing (OLTP) and batch environments typically have values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to choose a full table scan over an index if the value of this parameter is high.

The db_file_multiblock_read_count parameter is only applicable to tables and indexes that are scanned in full, but it also affects the SQL optimizer in its calculation of the cost of a full-table scan.

If DB_FILE_MULTIBLOCK_READ_COUNT is not set or is set to 0 (zero), then the optimizer uses a value of 8 for costing.

Before 10g R2, DBAs used the db_file_multiblock_read_count initialization parameter to tell Oracle how many blocks to retrieve in a single I/O operation; from 10g R2 onward, Oracle automatically computes a suitable default based on the operating system's optimal I/O size when the parameter is not set explicitly.
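The current value can be inspected, and the parameter set explicitly if desired (the value 16 is illustrative only):

```sql
-- Inspect the current setting
SELECT value
FROM   v$parameter
WHERE  name = 'db_file_multiblock_read_count';

-- Explicitly request up to 16 blocks per multiblock read
ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE = BOTH;
```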

HASH_AREA_SIZE


HASH_AREA_SIZE is relevant to parallel execution operations and to the query portion of DML or DDL statements. It specifies the maximum amount of memory, in bytes, to be used for hash joins. Oracle recommends that you instead enable automatic sizing of SQL work areas by setting PGA_AGGREGATE_TARGET.

The hash_area_size parameter defaults to twice sort_area_size and is used for performing hash joins of Oracle tables. The higher the value of hash_area_size, the higher the propensity for the CBO to use a hash join.

SORT_AREA_SIZE


The sort_area_size parameter controls the RAM available to a dedicated task for sorting SQL result sets, reducing expensive disk sorts.

sort_area_size is ignored when pga_aggregate_target is set and workarea_size_policy = AUTO.

SORT_AREA_SIZE is an upper bound on the memory dynamically allocated for a sort.

WORKAREA_SIZE_POLICY


WORKAREA_SIZE_POLICY specifies the policy for sizing work areas; it controls the mode in which work areas are tuned. Setting it to AUTO tells the server to take over PGA memory management, while the PGA_AGGREGATE_TARGET parameter specifies the total amount of memory the server can allocate to the PGA.

•AUTO

Work areas used by memory-intensive operators are sized automatically, based on the PGA memory used by the system, the target PGA memory set in PGA_AGGREGATE_TARGET, and the requirement of each individual operator. You can specify AUTO only when PGA_AGGREGATE_TARGET is defined.

•MANUAL

The sizing of work areas is manual and based on the values of the *_AREA_SIZE parameter corresponding to the operation (for example, a sort uses SORT_AREA_SIZE). Specifying MANUAL may result in sub-optimal performance and poor PGA memory utilization.
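A sketch of enabling automatic PGA management (the 1 GB target is an example value, not a recommendation):

```sql
-- Let Oracle size work areas automatically within a 1 GB PGA target
ALTER SYSTEM SET pga_aggregate_target  = 1G   SCOPE = BOTH;
ALTER SYSTEM SET workarea_size_policy  = AUTO SCOPE = BOTH;
```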


QUERY_REWRITE_ENABLED and QUERY_REWRITE_INTEGRITY


QUERY_REWRITE_ENABLED allows you to enable or disable query rewrite globally for the database, letting the optimizer make use of materialized views and function-based indexes when choosing the best access path. To take advantage of query rewrite for a particular materialized view, you must enable query rewrite for that materialized view, and you must enable cost-based optimization.

There are three acceptable values for query_rewrite_integrity:

enforced (default) - rewrites queries using only fresh materialized view data
trusted - assumes that the materialized view is current
stale_tolerated - allows rewrites using materialized views with both stale and fresh data
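A minimal sketch of a rewrite-enabled materialized view (the table and view names are hypothetical):

```sql
-- Summary the optimizer may transparently substitute for matching queries
CREATE MATERIALIZED VIEW sales_by_region_mv
ENABLE QUERY REWRITE
AS
SELECT region, SUM(amount) AS total_amount
FROM   sales
GROUP  BY region;
```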

STATISTICS_LEVEL


This is an important parameter for the collection of statistics on database objects, because the Oracle optimizer, being cost based, chooses the best execution path based on the statistics available on those objects.

STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. The Oracle Database collects these statistics for a variety of purposes, including making self-management decisions.

The default setting of TYPICAL ensures collection of all major statistics required for database self-management functionality and provides best overall performance. The default value should be adequate for most environments.

When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added to the set of statistics collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics.

Setting the STATISTICS_LEVEL parameter to BASIC disables the collection of many of the important statistics required by Oracle Database features and functionality, including:

•Automatic Workload Repository (AWR) Snapshots

•Automatic Database Diagnostic Monitor (ADDM)

•All server-generated alerts

•Automatic SGA Memory Management

•Automatic optimizer statistics collection

•Object level statistics

•End to End Application Tracing (V$CLIENT_STATS)

•Database time distribution statistics (V$SESS_TIME_MODEL and V$SYS_TIME_MODEL)

•Service level statistics

•Buffer cache advisory

•MTTR advisory

•Shared pool sizing advisory

•Segment level statistics

•PGA Target advisory

•Timed statistics

•Monitoring of statistics
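The setting can be changed system-wide, and the statistics gathered at each level can be inspected through the V$STATISTICS_LEVEL view:

```sql
-- Keep the recommended default collection level
ALTER SYSTEM SET statistics_level = TYPICAL SCOPE = BOTH;

-- List each statistic/advisory and the level at which it is collected
SELECT statistics_name, activation_level
FROM   v$statistics_level
ORDER  BY statistics_name;
```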













