Friday 8 June 2012

Answers for Oracle Interview Questions

101)Will Context Switching hinder the performance of an application?
Yes, it can. When SQL statements are called from PL/SQL in a loop, execution switches back and forth between the PL/SQL engine and the SQL engine. This switch between the two environments is known as a context switch. Context switches increase the elapsed time of your programs and introduce unnecessary CPU overhead. You should reduce the number of context switches by eliminating or reducing the switching between these two environments.

102)How will you overcome Context Switching in Oracle?

By reducing the number of calls back and forth between the SQL and PL/SQL environments, for example by replacing row-by-row loops with set-based SQL, or with bulk operations such as BULK COLLECT and FORALL.
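As an illustrative sketch (the EMP table and column names here are hypothetical), a row-by-row loop pays one context switch per iteration, while a single set-based statement pays only one in total:

```sql
-- Row-by-row: one context switch per loop iteration.
BEGIN
  FOR rec IN (SELECT employee_id FROM emp) LOOP
    UPDATE emp
       SET salary = salary * 1.1
     WHERE employee_id = rec.employee_id;  -- SQL engine call every time
  END LOOP;
END;
/

-- Set-based: the whole update is one SQL statement, one switch.
BEGIN
  UPDATE emp SET salary = salary * 1.1;
END;
/
```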

103)What is meant by BULK COLLECT?
One method of fetching data is a bulk collect. With BULK COLLECT, the PL/SQL engine tells the SQL engine to collect many rows at once and place them in a collection. The SQL engine retrieves all the rows, loads them into the collection, and switches back to the PL/SQL engine, so the rows are retrieved with only two context switches. The more rows you fetch per bulk collect, the greater the performance improvement you will see.
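A minimal sketch of the technique, again assuming a hypothetical EMP table:

```sql
DECLARE
  TYPE t_ids IS TABLE OF emp.employee_id%TYPE;
  l_ids t_ids;
BEGIN
  -- All matching rows are fetched into the collection in one round trip.
  SELECT employee_id
    BULK COLLECT INTO l_ids
    FROM emp;

  DBMS_OUTPUT.PUT_LINE(l_ids.COUNT || ' rows fetched');
END;
/
```

An unconstrained fetch like this loads the whole result set into session (PGA) memory, which is why the LIMIT clause discussed below matters for large result sets.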

104)What is the use of the FORALL Clause in Oracle?
The FORALL syntax allows us to bind the contents of a collection to a single DML statement, allowing the DML to be run for each row in the collection without requiring a context switch each time.
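A sketch combining BULK COLLECT and FORALL (table and column names are hypothetical):

```sql
DECLARE
  TYPE t_ids IS TABLE OF emp.employee_id%TYPE;
  l_ids t_ids;
BEGIN
  SELECT employee_id
    BULK COLLECT INTO l_ids
    FROM emp
   WHERE department_id = 10;

  -- One context switch executes the UPDATE for every element.
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE emp
       SET salary = salary * 1.1
     WHERE employee_id = l_ids(i);
END;
/
```

Note that FORALL is not a loop: the body must be a single DML statement, which is bound once against the whole collection.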

105)What are the different types of collections available in Oracle?
NESTED TABLES
ASSOCIATIVE ARRAYS(INDEX BY TABLES)
VARRAYS

Please refer to the Oracle Documentation for more information on each of the collections.
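For quick reference, the three collection types are declared like this (a standalone sketch, not tied to any table):

```sql
DECLARE
  TYPE t_nested IS TABLE OF VARCHAR2(30);                 -- nested table
  TYPE t_assoc  IS TABLE OF NUMBER INDEX BY PLS_INTEGER;  -- associative array
  TYPE t_varray IS VARRAY(5) OF DATE;                     -- varray, bounded at 5

  l_nested t_nested := t_nested('A', 'B');   -- needs a constructor
  l_assoc  t_assoc;                          -- no constructor required
  l_varray t_varray := t_varray(SYSDATE);
BEGIN
  l_assoc(100) := 1;  -- associative arrays may use arbitrary, sparse keys
  DBMS_OUTPUT.PUT_LINE(l_nested.COUNT || ' / limit ' || l_varray.LIMIT);
END;
/
```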

106)Can you explain the collection types one by one along with their pitfalls?

In brief: associative arrays (index-by tables) are unbounded, may be sparse, and exist only in memory, so they cannot be stored in database columns; nested tables can be stored in database columns but can become sparse after element deletions; varrays have a fixed maximum size and always remain dense. Please refer to the Oracle Documentation for full details and pitfalls of each collection type.

107)What are the different memory issues encountered when using a BULK COLLECT?
It is recommended you never issue a FETCH BULK COLLECT on a result set without a LIMIT clause when you do not know ahead of time the size (or approximate size) of the result set. You may be thinking that the database initialization parameter PGA_AGGREGATE_TARGET will insulate your system from such problems. This is not correct. This parameter only applies to memory allocations that the database can adjust internally as required, such as memory for sorting or hashing. If you ask for 100GB of PGA for PL/SQL collection memory, the database will try to honor that request, no matter how much trouble that may cause.

108)What is the use of a LIMIT Clause?
SGA memory is shared by all sessions connected to Oracle Database, but PGA memory is allocated for each session. Fortunately, PL/SQL makes it easy for developers to control the amount of memory used in a BULK COLLECT operation by using the LIMIT clause.
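The standard LIMIT pattern looks like this (hypothetical EMP table; the limit of 100 is an arbitrary illustration):

```sql
DECLARE
  CURSOR c IS SELECT employee_id FROM emp;
  TYPE t_ids IS TABLE OF emp.employee_id%TYPE;
  l_ids t_ids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 100;  -- at most 100 rows in memory
    EXIT WHEN l_ids.COUNT = 0;

    FOR i IN 1 .. l_ids.COUNT LOOP
      NULL;  -- process each fetched row here
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/
```

Note the exit test is `l_ids.COUNT = 0`, not `c%NOTFOUND` before processing: the final fetch can return fewer rows than the limit and still set %NOTFOUND, so testing %NOTFOUND first would silently skip the last partial batch.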

109)What is AWR?
The Oracle database uses AWR for problem detection and analysis as well as for self-tuning. A number of different statistics are collected by the AWR, including wait events, time model statistics, active session history statistics, various system- and session-level statistics, object usage statistics, and information on the most resource-intensive SQL statements.

The AWR is used to collect performance statistics including:

*    Wait events used to identify performance problems.
*    Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
*    Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
*    Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
*    Object usage statistics.
*    Resource intensive SQL statements.
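Snapshots are taken automatically (hourly by default), but they can also be managed through the DBMS_WORKLOAD_REPOSITORY package; the settings below are illustrative values, and AWR use requires the Diagnostics Pack license:

```sql
-- Take a manual AWR snapshot.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Keep 8 days of snapshots, taken every 60 minutes (values in minutes).
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( -
  retention => 11520, interval => 60);
```

An AWR report between two snapshots is typically generated with the supplied script `@?/rdbms/admin/awrrpt.sql`.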

110)What is ADDM?
For Oracle systems, the statistical data needed for accurate diagnosis of a problem is saved in the Automatic Workload Repository (AWR). The Automatic Database Diagnostic Monitor (ADDM) analyzes the AWR data on a regular basis, then locates the root causes of performance problems, provides recommendations for correcting any problems, and identifies non-problem areas of the system. Because AWR is a repository of historical performance data, ADDM can be used to analyze performance issues after the event, often saving time and resources reproducing a problem.

The Automatic Database Diagnostic Monitor (ADDM) analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. For each of the identified issues it locates the root cause and provides recommendations for correcting the problem. An ADDM analysis task is performed, and its findings and recommendations stored in the database, every time an AWR snapshot is taken, provided the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. The findings (problems) are listed in order of potential impact on database performance, along with recommendations to resolve the issue and the symptoms which led to its discovery.

The obvious place to start viewing ADDM reports is Enterprise Manager.
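ADDM reports can also be produced from SQL*Plus with the supplied script `@?/rdbms/admin/addmrpt.sql`, or programmatically through the 11g DBMS_ADDM package. A sketch, where the snapshot IDs 101 and 102 are hypothetical:

```sql
-- Run an ADDM analysis over a snapshot range and print the report.
VAR task_name VARCHAR2(60)
BEGIN
  :task_name := 'addm_demo';
  DBMS_ADDM.ANALYZE_DB(:task_name, 101, 102);  -- begin/end snapshot IDs
END;
/
SELECT DBMS_ADDM.GET_REPORT(:task_name) FROM dual;
```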

The types of problems that ADDM considers include the following:

*    CPU bottlenecks - Is the system CPU bound by Oracle or some other application?
*    Undersized Memory Structures - Are the Oracle memory structures, such as the SGA, PGA, and buffer cache, adequately sized?
*    I/O capacity issues - Is the I/O subsystem performing as expected?
*    High load SQL statements - Are there any SQL statements which are consuming excessive system resources?
*    High load PL/SQL execution and compilation, as well as high load Java usage
*    RAC specific issues - What are the global cache hot blocks and objects; are there any interconnect latency issues?
*    Sub-optimal use of Oracle by the application - Are there problems with poor connection management, excessive parsing, or application level lock contention?
*    Database configuration issues - Is there evidence of incorrect sizing of log files, archiving issues, excessive checkpoints, or sub-optimal parameter settings?
*    Concurrency issues - Are there buffer busy problems?
*    Hot objects and top SQL for various problem areas

111)What is the use of ADDM Report and Recommendations?
ADDM considers the same problem areas described in the previous answer: CPU bottlenecks, undersized memory structures, I/O capacity issues, high-load SQL and PL/SQL, RAC-specific issues, sub-optimal use of Oracle by the application, database configuration issues, concurrency issues, and hot objects.

A problem finding can be associated with a list of RECOMMENDATIONs for reducing the impact of the performance problem. Each recommendation has a benefit which is an estimate of the portion of DB time that can be saved if the recommendation is implemented. A list of recommendations can contain various alternatives for solving the same problem; you do not have to apply all the recommendations to solve a specific problem.

112)What is SQL Performance Analyzer?

Oracle 11g makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as:

*    Database, operating system, or hardware upgrades.
*    Database, operating system, or hardware configuration changes.
*    Database initialization parameter changes.
*    Schema changes, such as adding indexes or materialized views.
*    Refreshing optimizer statistics.
*    Creating or changing SQL profiles.

113)When exactly do you go in for a SQL Performance Analyzer?

Whenever we want to compare the performance of the statements in a tuning set before and after a database change, whether major or minor, we can make use of the SQL Performance Analyzer.
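A sketch of a SQL Performance Analyzer run via the DBMS_SQLPA package, assuming an existing SQL tuning set named MY_STS (the set name and task name are hypothetical, and the feature requires the Real Application Testing option):

```sql
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
              sqlset_name => 'MY_STS',
              task_name   => 'spa_demo');

  -- Capture performance before the change.
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name => l_task, execution_type => 'TEST EXECUTE',
    execution_name => 'before_change');

  -- ...make the change (index, parameter, upgrade), then capture after.
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name => l_task, execution_type => 'TEST EXECUTE',
    execution_name => 'after_change');

  -- Compare the two executions.
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name => l_task, execution_type => 'COMPARE PERFORMANCE',
    execution_name => 'compare');
END;
/
```

The comparison report is then retrieved with `DBMS_SQLPA.REPORT_ANALYSIS_TASK`.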

114)What is ASH?

Retrieving ASH information is necessary if you need to get session information more current than you can retrieve from the AWR report. Again, AWR information is generated only hourly by default. ASH information is gathered every second from V$SESSION, and stores the most useful session information to help gauge database performance at any given moment.

You may need to view the DBA_HIST_ACTIVE_SESS_HISTORY historical view in order to get the ASH information you need if your database is very active. The MMON background process, which manages the AWR hourly snapshots, also flushes ASH information to the historical view at the same time. If there is heavy activity on the database, and the buffer fills between the hourly AWR snapshots, the MMNL background process will wake up and flush the ASH data to the historical view.

115)What are the different Views that can be Queried for getting info on ASH?
Recent ASH samples, gathered every second from V$SESSION, can be queried from the in-memory V$ACTIVE_SESSION_HISTORY view. Older samples that have been flushed to the AWR are available in the DBA_HIST_ACTIVE_SESS_HISTORY historical view, which you may need if your database is very active.
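A typical ASH query, here summarizing the top wait events over the last 30 minutes (the time window is an arbitrary illustration):

```sql
-- Top wait events in the last 30 minutes from in-memory ASH samples.
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 30/1440
AND    session_state = 'WAITING'
GROUP  BY event
ORDER  BY samples DESC;
```

Because ASH samples once per second, the sample count for an event roughly approximates seconds of wait time across all sessions.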

116)What is meant by Latch?

Latches are lightweight serialization devices used to coordinate multiuser access to shared data structures, objects, and files.
Latches are locks designed to be held for extremely short periods of time, for example the time it takes to modify an in-memory data structure. They are used to protect certain memory structures, such as the database block buffer cache or the library cache in the shared pool. Latches are typically requested internally in a 'willing to wait' mode. This means that if the latch is not available, the requesting session will sleep for a short period of time and retry the operation later. Other latches may be requested in an 'immediate' mode, which is similar in concept to a SELECT FOR UPDATE NOWAIT, meaning that the process will go do something else, such as try to grab an equivalent sibling latch that may be free, rather than sit and wait for this latch to become available. Since many requestors may be waiting for a latch at the same time, you may see some processes waiting longer than others. Latches are assigned rather randomly, based on the luck of the draw, if you will. Whichever session asks for a latch right after it was released will get it. There is no line of latch waiters, just a mob of waiters constantly retrying.
Oracle uses atomic instructions like 'test and set' and 'compare and swap' for operating on latches. Since the instructions to set and free latches are atomic, the operating system itself guarantees that only one process gets to test and set the latch even though many processes may be going for it simultaneously. Since the instruction is only one instruction, it can be quite fast. Latches are held for short periods of time and provide a mechanism for cleanup in case a latch holder 'dies' abnormally while holding it. This cleanup process would be performed by PMON.

Latch Spinning

Waiting for a latch can be an expensive operation. If the latch is not available immediately and we are willing to wait for it, as we likely are most of the time, then on a multi-CPU machine our session will spin, trying over and over in a loop to get the latch. The reasoning behind this is that context switching (i.e., getting 'kicked off' the CPU and having to get back on the CPU) is expensive. So, if the process cannot get a latch immediately, we'll stay on the CPU and try again immediately rather than just going to sleep, giving up the CPU, and trying later when we'll have to get scheduled back on the CPU. The hope is that the holder of the latch is busy processing on the other CPU (and since latches are designed to be held for very short periods of time, this is likely) and will give it up soon. If after spinning and constantly trying to get the latch, we still fail to obtain it, only then will our process sleep, or take itself off of the CPU, and let some other work take place.
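Latch activity is exposed through the V$LATCH family of views; a simple starting point for contention analysis is to list the latches with the most sleeps:

```sql
-- Latches with the most sleeps (candidates for contention analysis).
SELECT *
FROM  (SELECT name, gets, misses, sleeps
       FROM   v$latch
       ORDER  BY sleeps DESC)
WHERE ROWNUM <= 10;
```

A high sleeps-to-gets ratio on a latch such as 'library cache' or 'cache buffers chains' points to the contention scenarios discussed later in this post.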

117)How do you overcome Latches in Oracle?
Latch contention is best reduced by addressing its cause, for example by using bind variables to reduce library cache latch contention, or by tuning hot blocks for cache buffers chains latches. Historically, the DB_BLOCK_LRU_LATCHES parameter could be increased to provide more buffer cache LRU latches, but in recent releases this parameter is hidden and Oracle sizes the LRU latches automatically.


118)What is the use of Database Replay?
You can use Database Replay to capture a workload on the production system and replay it on a test system with the exact timing, concurrency, and transaction characteristics of the original workload. Database Replay supports workload capture on a system running Oracle Database 10g Release 2 and newer releases. In order to capture a workload on a system running Oracle Database 10g Release 2, the database version can be 10.2.0.4 or higher. Workload replay is only supported on systems running Oracle Database 11g Release 1 and newer releases.
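The capture and replay steps are driven by the DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages. A sketch, where the directory object name CAP_DIR and the capture/replay names are hypothetical, and replay additionally requires external `wrc` replay clients to be running:

```sql
-- On production: capture the workload into a directory object.
EXEC DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'cap1', dir => 'CAP_DIR');
-- ... run the production workload ...
EXEC DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;

-- On the test system: preprocess the capture files, then replay.
EXEC DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE('CAP_DIR');
EXEC DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY( -
  replay_name => 'rep1', replay_dir => 'CAP_DIR');
EXEC DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
EXEC DBMS_WORKLOAD_REPLAY.START_REPLAY;
```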


119)How do you accept a Profile Recommended by ADDM?
To configure automatic SQL tuning, run the SET_TUNING_TASK_PARAMETER procedure in the DBMS_SQLTUNE package:
BEGIN
  DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES', value => 'TRUE');
END;
/
In this example, the automatic SQL tuning task is configured to automatically accept SQL profiles recommended by the SQL Tuning Advisor.

120)Any idea on the Database Buffer Cache?
The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles. All user processes concurrently connected to the instance share access to the database buffer cache.

The database buffer cache and the shared SQL cache are logically segmented into multiple sets. This organization into multiple sets reduces contention on multiprocessor systems.

Oracle Database uses the buffer cache to store blocks read from disk. Oracle Database bypasses the buffer cache for particular operations, such as sorting and parallel reads.

The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being accessed.

When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list. As more buffers are continually moved to the MRU end of the LRU list, dirty buffers age toward the LRU end of the LRU list.

The first time an Oracle user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss.

Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU list, starting at the least recently used end of the list. The process searches either until it finds a free buffer or until it has searched the threshold limit of buffers.

If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer to the write list and continues to search. When the process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the MRU end of the LRU list.

If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process stops searching the LRU list and signals the DBW0 background process to write some of the dirty buffers to disk.

121)Can you throw some light on the Data Dictionary and the Library Cache?
The library cache includes the shared SQL areas, private SQL areas (in the case of a shared server configuration), PL/SQL procedures and packages, and control structures such as locks and library cache handles.
Shared SQL areas are accessible to all users, so the library cache is contained in the shared pool within the SGA.

The data dictionary is a collection of database tables and views containing reference information about the database, its structures, and its users. Oracle accesses the data dictionary frequently during SQL statement parsing. This access is essential to the continuing operation of Oracle.

The data dictionary is accessed so often by Oracle that two special locations in memory are designated to hold dictionary data. One area is called the data dictionary cache, also known as the row cache because it holds data as rows instead of buffers (which hold entire blocks of data). The other area in memory to hold dictionary data is the library cache. All Oracle user processes share these two caches for access to data dictionary information.

122)How do you overcome Data Dictionary and Library Cache Contention?
Whenever new statements are added to the library cache area, the library cache latch must be acquired by Oracle as part of the operation. Oracle scans the library cache area during the parse phase for matching SQL statements. If one is not found, then Oracle will complete the parse, obtain the library cache latch for the SQL statement, and then insert it into the library cache area for future reuse.

Contention due to the library cache latches as well as shared pool latches is usually due to applications not using bind variables. You can set the CURSOR_SHARING parameter to force Oracle to use bind variables, even if your application hasn't specified them in the code. You can choose between a setting of FORCE or SIMILAR for this parameter to force the substituting of bind variables for hard-coded values of variables. The default setting for this parameter is EXACT, which means that the database won't substitute bind variables for literal values. When you set the CURSOR_SHARING parameter to FORCE, Oracle converts all literals to bind variables. The SIMILAR setting causes a statement to use bind variables only if doing so doesn't change a statement's execution plan; note, however, that SIMILAR is deprecated from Oracle 11g onward.

The library cache contention usually disappears once you set the CURSOR_SHARING parameter to FORCE or to SIMILAR.
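The parameter can be changed dynamically; as always, verify the effect on execution plans in a test system first:

```sql
-- Force replacement of literals with system-generated bind variables.
ALTER SYSTEM SET CURSOR_SHARING = FORCE;

-- Revert to the default behavior (no literal replacement).
ALTER SYSTEM SET CURSOR_SHARING = EXACT;
```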

There are two types of latches: "willing to wait" latches (an example is a library cache latch) and "not willing to wait" latches (an example is a redo copy latch). A process that is willing to wait will try to acquire a latch. If none are available, it will spin and then request the latch again. It will continue to do this up to the _SPIN_COUNT initialization parameter (note that spinning costs CPU). If it can't get a latch after spinning up to the _SPIN_COUNT, it will go to sleep, not do anything for a while, and then will wake up after one centisecond (one hundredth of a second). It will do this twice. It will then start this process again, spinning up to the _SPIN_COUNT and then sleeping for twice as long (two centiseconds). After doing this again, it will double again. So the pattern is 1, 1, 2, 2, 4, 4, etc. It will do this until it gets the latch. Every time the process sleeps on the latch, it records a latch sleep wait.

Data Dictionary Cache Contention

The "row cache objects" is a serialization latch that protects the access to the data dictionary cache in the SGA. This latch is acquired whenever you are referencing metadata objects in the data dictionary cache.

A latch closely related to the shared pool latch is the row cache objects latch. The purpose of this latch is to provide data protection for user processes that access the cached data dictionary area within Oracle.

The row cache objects latch can also indicate SQL with excessive hard parsing, or excessive reliance on data dictionary information such as views, row-level security, synonyms, etc. The general solution for row cache latch waits is to increase SHARED_POOL_SIZE.
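Dictionary cache activity itself can be inspected via the V$ROWCACHE view; a high ratio of misses to gets for a parameter suggests the dictionary cache (and hence the shared pool) is undersized:

```sql
-- Dictionary cache activity by object type.
SELECT parameter, gets, getmisses,
       ROUND(getmisses / NULLIF(gets, 0) * 100, 2) AS miss_pct
FROM   v$rowcache
WHERE  gets > 0
ORDER  BY gets DESC;
```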
