Friday 8 June 2012

Answers for Oracle Interview Questions

177)Whats the difference between a scalar sub Query and an Inline View?
A subquery in the FROM clause is called an inline view, and this subquery may return any number of columns and rows. A subquery in the WHERE clause is called a nested subquery.

A scalar subquery is a subquery in a SQL statement that returns exactly one column and at most one row. That single column can be a complex object type, so it can consist of many attributes, but the subquery returns a single scalar value (or NULL if the subquery returns zero records). A scalar subquery can be used anywhere a literal could have been used.
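
A minimal sketch of both forms, using the classic EMP/DEPT demo tables (illustrative):

-- Inline view: the subquery in the FROM clause can return many rows and columns
SELECT d.dname, e.cnt
FROM dept d,
     (SELECT deptno, COUNT(*) cnt FROM emp GROUP BY deptno) e
WHERE d.deptno = e.deptno;

-- Scalar subquery: returns a single value (or NULL) for each row
SELECT d.dname,
       (SELECT COUNT(*) FROM emp e WHERE e.deptno = d.deptno) AS emp_count
FROM dept d;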

178)Can we lock statistics on a Table?
Yes, using the LOCK_TABLE_STATS (or LOCK_SCHEMA_STATS) procedure of the DBMS_STATS package.
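
For example (schema and table names are illustrative):

exec dbms_stats.lock_table_stats('SCOTT', 'EMP');
exec dbms_stats.unlock_table_stats('SCOTT', 'EMP');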

179)Will the use of a DB Link hinder the performance of a Query?
The use of DB links can cause performance degradation because of the network lag in pushing large amounts of data from the remote site to the local site. The performance of the query therefore depends upon the activity carried out on the data and on the amount of data transferred from the remote site.

The DRIVING_SITE hint forces query execution to be done at a different site than the initiating instance. If your SQL performs a sort, be aware that the sort will be performed on the LOCAL database. In distributed databases, when a table has been partitioned into separate instances, a parallel query can be invoked to read the remote tables simultaneously.
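
A sketch of the DRIVING_SITE hint (the table names and the remote_db DB link are illustrative); it asks Oracle to execute the join at the remote site and ship back only the result:

SELECT /*+ DRIVING_SITE(r) */ l.col1, r.col2
FROM   local_tab l, remote_tab@remote_db r
WHERE  l.id = r.id;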

182)Any idea on the OPTIMIZER_DYNAMIC_SAMPLING Parameter?
The OPTIMIZER_DYNAMIC_SAMPLING initialization parameter controls the number of blocks read by the dynamic sampling query. The parameter can be set to a value from 0 to 10.

Optimizer dynamic sampling refers to the ability of the SQL optimizer to take a sample of rows from a table to calculate missing statistics.

Some of the sampling levels that can be used;

level 0 - do not use dynamic sampling

level 1 - Sample all tables that have not been analyzed if the following criteria are met: (1) there is at least 1 unanalyzed table in the query; (2) this unanalyzed table is joined to another table or appears in a subquery or non-mergeable view; (3) this unanalyzed table has no indexes; (4) this unanalyzed table has more blocks than the number of blocks that would be used for dynamic sampling of this table. The number of blocks sampled is the default number of dynamic sampling blocks (32).

level 2 (default) - Apply dynamic sampling to all unanalyzed tables. The number of blocks sampled is two times the default number of dynamic sampling blocks.

level 3 - Apply dynamic sampling to all tables that meet Level 2 criteria, plus all tables for which standard selectivity estimation used a guess for some predicate that is a potential dynamic sampling predicate. The number of blocks sampled is the default number of dynamic sampling blocks.

level 4 - Apply dynamic sampling to all tables that meet Level 3 criteria, plus all tables that have single-table predicates that reference 2 or more columns. The number of blocks sampled is the default number of dynamic sampling blocks. For unanalyzed tables, the number of blocks sampled is two times the default number of dynamic sampling blocks.

levels 5, 6, 7, 8, and 9 - Apply dynamic sampling to all tables that meet the previous level criteria using 2, 4, 8, 32, or 128 times the default number of dynamic sampling blocks respectively.

level 10 - sample all blocks in the table

If you are using the default (which is 2) for OPTIMIZER_DYNAMIC_SAMPLING then dynamic sampling is only used for unanalyzed tables.
Setting it to 0 (off) will result in a default value of stats for unanalyzed tables.

Dynamic sampling analyzes the unanalyzed tables and thereby helps the optimizer arrive at an optimized execution plan based on the statistics available on the tables. Since 9i, the Oracle optimizer is a cost-based optimizer and therefore relies on the statistics of the table to find the optimized execution plan.
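
Dynamic sampling can be controlled at the session level or per query via a hint; a sketch (the table name is illustrative):

ALTER SESSION SET optimizer_dynamic_sampling = 4;

SELECT /*+ dynamic_sampling(t 4) */ *
FROM   unanalyzed_tab t
WHERE  col1 = 10;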

183)What does cost refer to in a cost based optimizer?
COST - The cost of the operation as estimated by the optimizer’s cost-based approach. The value of this column does not have any particular unit of measurement; it is merely a weight value used to compare costs of execution plans.

BYTES - The cost-based approach’s estimate of the number of bytes accessed by the operation.

 Tip - The BYTES column is extremely important when evaluating how to tune a query. When an index is used and the number of bytes is great, it implies that doing a full table scan would perhaps be more efficient (i.e., reading the index and data is more costly than just reading the data in a full table scan). Also, the number of bytes helps us to determine which table should be accessed first in the query (driving table), because one table may limit the number of bytes needed from another.

 Tip - Remember that both the COST and BYTES values in a query are estimates; it is quite possible for a version of a query with a higher estimated cost or bytes to run faster than another with a lower value.
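
The COST and BYTES columns can be inspected in the plan output; for example (the EMP query is illustrative):

EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);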

184)What do you mean by stored outlines in Oracle?
Using Stored Outlines
=====================
Up until recently, the chief use for execution plans was to determine what Oracle was doing with queries at run time as a tuning tool. A fairly new facility called STORED OUTLINES allows a query to use a predetermined execution plan every time that query is run, no matter where the query is run from. People sometimes speak of the STORED OUTLINES as storing an execution plan, but this is not really what happens. Instead, Oracle stores a series of hints—instructions to the database to execute a query in a precise way—to duplicate the execution plan as saved during a recording session.

The following privileges are required to use STORED OUTLINES:
CREATE ANY OUTLINE
EXECUTE_CATALOG_ROLE (to use the DBMS_OUTLN package)
PLUSTRACE (to use AUTOTRACE, if applicable)

You should create the outline tables in your own schema; if you don't, the outlines will be stored in the SYSTEM tablespace (which is almost always a mistake). The outline tables can be created in the current schema by running the DBMS_OUTLN_EDIT.CREATE_EDIT_TABLES procedure with the following command:
exec dbms_outln_edit.create_edit_tables

There are two kinds of stored outlines: private, which are session-specific, and public, which can affect the entire database.
Which kind is being used is controlled by the USE_PRIVATE_OUTLINES session parameter setting. If this setting is TRUE, then private outlines are used.
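
A sketch of recording and using an outline (the outline name, category, and statement are illustrative):

CREATE OR REPLACE OUTLINE emp_by_dept
FOR CATEGORY dev_outlines
ON SELECT * FROM emp WHERE deptno = 10;

ALTER SESSION SET use_stored_outlines = dev_outlines;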

185)What are the caveats of Bitmap join Indexes?
You cannot specify REBUILD ONLINE for bitmap indexes or for indexes that enforce referential integrity constraints.

186)What do you mean by pipelined table functions?
Pipelined table functions are functions that produce a collection of rows (such as a nested table) that can be queried like a physical database table or assigned to a PL/SQL collection variable. You can use a table function in place of the name of a database table in the FROM clause of a query or in place of a column name in the SELECT list of a query. A pipelined table function is declared with the PIPELINED keyword and returns rows iteratively with PIPE ROW as they are produced, instead of materializing the entire collection before returning.
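
A minimal sketch of a pipelined table function (the type and function names are illustrative); note the PIPELINED keyword and PIPE ROW:

CREATE OR REPLACE TYPE num_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION first_n(n IN PLS_INTEGER)
  RETURN num_tab PIPELINED
IS
BEGIN
  FOR i IN 1 .. n LOOP
    PIPE ROW (i);   -- each row is available to the caller as soon as it is produced
  END LOOP;
  RETURN;
END;
/
SELECT column_value FROM TABLE(first_n(5));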

187)What is Consistent gets in Oracle?
The number of blocks read from the buffer cache in consistent-read (CR) mode, i.e., for queries without the SELECT FOR UPDATE clause. The value for this statistic plus the value of the "db block gets" statistic (current-mode reads) constitutes what is referred to as logical reads (all reads cached in memory). A consistent get may be satisfied by the CURRENT version of the block if it hasn't changed, but it can also require a Consistent Read (CR) version reconstructed from undo.

188)What is the Sequence Caching in Oracle?
The CACHE clause for a sequence determines the number of sequence values the database pre-allocates and keeps in the SGA for faster access (the default is 20).
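
For example (the sequence name is illustrative):

CREATE SEQUENCE order_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 100;   -- pre-allocate 100 values in the SGA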

189)Will Data type conversion hinder the performance of a Query?
Yes. Implicit data type conversion adds overhead because Oracle must convert the values used in the WHERE condition, and when the conversion is applied to an indexed column it can prevent the index from being used.
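
A sketch, assuming a table EMPLOYEES whose EMP_NO column is defined as VARCHAR2:

-- Oracle implicitly rewrites this predicate as TO_NUMBER(emp_no) = 12345,
-- which can prevent an index on emp_no from being used:
SELECT * FROM employees WHERE emp_no = 12345;

-- Matching the data type avoids the conversion:
SELECT * FROM employees WHERE emp_no = '12345';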

190)What are the different values for the STATISTICS_LEVEL parameter?
BASIC
TYPICAL
ALL

191)Any idea on the DB_FILE_MULTIBLOCK_READ_COUNT parameter?
DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans thereby reducing the time taken for the execution of a Query. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation.

Online transaction processing (OLTP) and batch environments typically have values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to choose a full table scan over an index if the value of this parameter is high.

The parameter db_file_multiblock_read_count is only applicable to tables/indexes that are full scanned, but it also affects the SQL optimizer in its calculation of the cost of a full-table scan.

If DB_FILE_MULTIBLOCK_READ_COUNT is not set or is set to 0 (zero), then the optimizer uses a value of 8 for costing.

Before 10g R2, DBAs used the db_file_multiblock_read_count initialization parameter to tell Oracle how many blocks to retrieve in a single I/O operation; from 10g R2 onward, Oracle can tune this value automatically when it is not set explicitly.
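
The parameter can be inspected and changed as follows (the value 16 is illustrative):

SHOW PARAMETER db_file_multiblock_read_count

ALTER SYSTEM SET db_file_multiblock_read_count = 16;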

192)When does Oracle go in for a FULL Table scan?
Oracle goes in for a full table scan when the optimizer estimates that a full table scan would be less costly than using the indexes to retrieve the data, when there are no usable indexes on the table, or when a hint applied to the query forces the optimizer to perform a full table scan.
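
For example, the FULL hint forces a full table scan (the table name is illustrative):

SELECT /*+ FULL(e) */ * FROM emp e WHERE sal > 1000;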





Answers for Oracle Interview Questions

151)What is meant by Row Migration?
We will migrate a row when an update to that row would cause it to not fit on the block anymore (with all of the other data that exists there currently).  A migration means that the entire row will move and we just leave behind the "forwarding address". So, the original block just has the rowid of the new block and the entire row is moved.

When we FULL SCAN a table, we actually *ignore* the forwarding addresses (the head rowpiece we call it for a row).  We know that as we continue the full scan, we'll eventually get to that row so we can ignore the forwarding address and just process the row when we get there.  Hence, in a full scan migrated rows don't cause us to really do any extra work -- they are meaningless.  Oh sure, the forwarding address is consuming a couple of bytes on the block -- it is overhead -- but frankly, it is meaningless.

When we INDEX READ into a table -- then a migrated row will cause additional IO's.  That is because the index will tell us "goto file X, block Y, slot Z to find this row".  But when we get there we find a message that says "well, really goto file A, block B, slot C to find this row".  We have to do another IO (logical or physical) to find the row.

152)When does a row get chained and migrated?
When a single row is stored in two or more blocks, this is called row chaining.

We will migrate a row when an update to that row would cause it to not fit on the block anymore (with all of the other data that exists there currently).  A migration means that the entire row will move and we just leave behind the "forwarding address". So, the original block just has the rowid of the new block and the entire row is moved.

153)How do you verify whether the rows in a table are chained or migrated?
select * from chained_rows gives information on the chained rows, once the table has been analyzed with the LIST CHAINED ROWS clause. DBA_TABLES also reports a chained-row count in its CHAIN_CNT column.
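
A sketch, assuming the EMP table and that the CHAINED_ROWS table has been created by the utlchain.sql script:

ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;

SELECT owner_name, table_name, head_rowid FROM chained_rows;

SELECT chain_cnt FROM dba_tables WHERE table_name = 'EMP';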

154)What is meant by Paging?
When paging occurs, information that is not currently being used is moved from memory to disk. This allows memory to be used by a process that currently needs it. If paging happens a lot, the system will experience decreases in performance, causing processes to take longer to run.

155)What is meant by swapping?
When swapping occurs, an active process is moved from memory to disk temporarily so that another active process that also desires memory can run. Swapping is based on system cycle time. If swapping happens a lot, your system is dead. Depending on the amount of memory available, an SGA that is too large can cause swapping.

156)When does this paging and swapping occur?
During sort operations, when the sort involves disk sorting.

157)How do you overcome Paging and Swapping?
By increasing the SORT_AREA_SIZE for the duration of the session which involves the sort operation.

158)Will a "!" negate the use of Indexes?
USE OF "NOT" CLAUSE WILL NEGATE THE USE OF BTREE INDEXES
========================================================

The NOT logic can appear in several forms, each of which can suppress the use of a B-tree index:

*    Comparison operators ('<>', '!=', '^=')
*    NOT IN
*    NOT LIKE
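
A sketch, assuming an ORDERS table with an indexed STATUS column holding only the values OPEN, CLOSED, and CANCELLED:

-- the negation typically suppresses the index:
SELECT * FROM orders WHERE status != 'OPEN';

-- an equivalent positive form can use the index:
SELECT * FROM orders WHERE status IN ('CLOSED', 'CANCELLED');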

159)What happens on the backend when we do a COMMIT statement?
When a user issues a COMMIT statement, LGWR writes the redo log buffer contents (up to and including the commit record) to the online redo log files, and the session waits for that write to complete before the commit returns.

160)How Often do you COMMIT in your application?
Reduce the Frequency of COMMITS in your application to increase the performance of the application.

161)Will the frequency of COMMIT hinder the performance of an application?
Since a COMMIT statement issues a disk I/O (the redo log file write) on the backend, an increase in the frequency of COMMITs will decrease the performance of the application.

162)Why Should we need to index a foreign Key?
Indexing a foreign key is necessary because a foreign key may have been created with a CASCADE clause, which states that whenever a parent key is deleted, the corresponding child entries in the child table are deleted or set to NULL, depending on the clause. Since this DELETE or SET NULL operation requires scanning the child table to find the child entries, indexing the foreign key column lets Oracle find the child entries with an index search instead of a full table scan during the CASCADE operation, thereby reducing the time required to complete it.

163)How do you identify whether Indexes are used by the application?
By Applying the MONITORING USAGE Clause in the ALTER INDEX Statement.
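
A sketch (the index name is illustrative); usage is reported in V$OBJECT_USAGE:

ALTER INDEX emp_name_idx MONITORING USAGE;

-- after the application has run for a while:
SELECT index_name, used FROM v$object_usage WHERE index_name = 'EMP_NAME_IDX';

ALTER INDEX emp_name_idx NOMONITORING USAGE;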

164)Whats the use of the QUERY REWRITE in materialized views in Oracle?
The QUERY REWRITE clause lets you specify whether the materialized view is eligible to be used for query rewrite. Specify ENABLE to enable the materialized view for query rewrite.

Enabling of query rewrite is subject to the following restrictions:

•You can enable query rewrite only if all user-defined functions in the materialized view are DETERMINISTIC.
•You can enable query rewrite only if expressions in the statement are repeatable. For example, you cannot include CURRENT_TIME or USER, sequence values (such as the CURRVAL or NEXTVAL pseudocolumns), or the SAMPLE clause (which may sample different rows as the contents of the materialized view change).
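
A minimal sketch (the view and table names are illustrative); query rewrite must also be enabled for the session or instance:

CREATE MATERIALIZED VIEW dept_sal_mv
ENABLE QUERY REWRITE
AS
SELECT deptno, SUM(sal) total_sal
FROM   emp
GROUP  BY deptno;

ALTER SESSION SET query_rewrite_enabled = TRUE;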

165)Any Idea about Histograms?
Histograms may help the Oracle optimizer in deciding whether to use an index vs. a full-table scan (where index values are skewed) or help the optimizer determine the fastest table join order. For determining the best table join order, the WHERE clause of the query can be inspected along with the execution plan for the original query. If the cardinality of the table is high, then histograms on the most selective column in the WHERE clause will tip off the optimizer and change the table join order.

If your database exclusively uses bind variables, Oracle recommends deleting any existing histograms and disabling histogram generation (METHOD_OPT) for any future dbms_stats analysis. This approach will use the number of distinct values to determine the selectivity of a column.

Most Oracle experts only recommend scheduled re-analysis for highly dynamic databases, and most shops save one very deep sample (with histograms), storing the statistics with the dbms_stats.export_schema_stats procedure. The only exceptions are highly volatile systems (e.g. lab research systems) where a table is huge one day and small the next.

For periodic re-analysis, many shops use the table "monitoring" option and also method_opt "auto" after they are confident that all histograms are in place.

Oracle histograms statistics can be created when you have a highly skewed index, where some values have a disproportional number of rows. In the real world, this is quite rare, and one of the most common mistakes with the CBO is the unnecessary introduction of histograms in the CBO statistics. As a general rule, histograms are used when a column's values warrant a change to the execution plan.
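
A sketch of gathering a histogram on a skewed column (the schema, table, and column are illustrative):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    method_opt => 'FOR COLUMNS job SIZE 254');  -- histogram on JOB with up to 254 buckets
END;
/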

166)How Does Collecting Histograms influence the performance of an Query?
Histograms may help the Oracle optimizer in deciding whether to use an index vs. a full-table scan (where index values are skewed) or help the optimizer determine the fastest table join order. For determining the best table join order, the WHERE clause of the query can be inspected along with the execution plan for the original query. If the cardinality of the table is high, then histograms on the most selective column in the WHERE clause will tip off the optimizer and change the table join order.

167)Whats the use of the KEEP and RECYCLE Buffer Pools in Oracle?
DB_KEEP_CACHE_SIZE
==================

db_cache_size + db_keep_cache_size + db_recycle_cache_size = total buffer cache size.

This parameter states the size reserved for pinning objects in the buffer cache so that they are not aged out of memory. Small, frequently used lookup tables can be kept in the KEEP pool so that the cost of frequent physical reads is avoided, because the data is accessed from the cache.


DB_RECYCLE_CACHE_SIZE
=====================

DB_RECYCLE_CACHE_SIZE specifies the size of the RECYCLE buffer pool. This pool is reserved for table blocks of large tables on which full table scans are performed.
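
A sketch of sizing the pools and assigning objects to them (the names and sizes are illustrative):

ALTER SYSTEM SET db_keep_cache_size = 32M;

ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);
ALTER TABLE big_history  STORAGE (BUFFER_POOL RECYCLE);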

168)What is the HIGH WATER MARK LEVEL w.r.t. Oracle?
The high water mark is simply a boundary separating the used blocks from the free blocks.

The blocks above the HWM are free blocks; they are ready to use.
The blocks below the HWM are used blocks; they have already been used.

169)Whats the use of the Bitmap Join Indexes?
The use of bitmap join indexes is restricted; you can only index the columns in the dimension tables. The columns used for the join must be primary key or unique constraints in the dimension tables, and if it is a composite primary key, you must use each of the columns in your join.
You cannot specify REBUILD ONLINE for bitmap indexes or for indexes that enforce referential integrity constraints.
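
A sketch of a bitmap join index, assuming a SALES fact table and a CUSTOMERS dimension whose CUST_ID is the primary key:

CREATE BITMAP INDEX sales_cust_region_bji
ON sales (customers.region)
FROM sales, customers
WHERE sales.cust_id = customers.cust_id;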

170)What are prefixed and non prefixed Indexes?

Prefixed Indexes
================

Prefixed indexes are indexes that contain keys from the partitioning key as the leading edge of the index.

Non-Prefixed Indexes
====================

Non-prefixed indexes are indexes that do not have the leading column of the partitioning key as the leading column of the index.
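
A sketch on a range-partitioned table (all names are illustrative):

CREATE TABLE sales_p (sale_date DATE, amt NUMBER)
PARTITION BY RANGE (sale_date)
( PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'));

-- prefixed: the partitioning key (sale_date) is the leading index column
CREATE INDEX sales_p_date_idx ON sales_p (sale_date) LOCAL;

-- non-prefixed: the leading index column (amt) is not the partitioning key
CREATE INDEX sales_p_amt_idx ON sales_p (amt) LOCAL;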

171)How do you shrink unused space?
You can shrink the unused space of a table using the SHRINK SPACE clause of the ALTER TABLE statement (the table must reside in an ASSM tablespace and row movement must be enabled).

You can instruct Oracle to not re-adjust the high-water mark when shrinking a table. This is done via the COMPACT clause—for example:
SQL> alter table inv shrink space compact;
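
A sketch using the same INV table; SHRINK SPACE requires row movement to be enabled first:

SQL> alter table inv enable row movement;
SQL> alter table inv shrink space compact;   -- compacts rows, leaves the HWM in place
SQL> alter table inv shrink space;           -- compacts rows and resets the HWM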

172)When do you think that an index needs to be rebuilt?
Rebuilding an index has a key advantage over re-creating an index from scratch, as well as a key disadvantage. The advantage of rebuilding an index is that the existing index is in place until the rebuild operation is complete, so it can therefore be used by queries that are run concurrently with the rebuild process. The main disadvantage of the index rebuild process is that you will need space for both indexes, which is required during the rebuild process. Some of the key reasons to rebuild an index include the following:

*    Rebuilding an index that has become fragmented over time
*    Rebuilding an index after a large, direct-path load of data
*    Moving an index to a different tablespace
*    Making an index usable again after a partition-level operation on the associated table has left it in an unusable state
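
For example (the index and tablespace names are illustrative):

ALTER INDEX emp_name_idx REBUILD ONLINE;

ALTER INDEX emp_name_idx REBUILD TABLESPACE idx_ts;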

173)Any idea on Compressed Indexes in Oracle?
This is really an option to a normal b-tree index.  It results in fewer leaf nodes, so less overall I/O and less to cache.  All of this means Oracle's optimizer will be just a little more likely to use these than a normal uncompressed index.  There is a cost to all of this, in CPU to uncompress when you access these.  Also, in reading about how the optimizer uses these, and choosing the proper compression level, it begins to look daunting.  Your mileage may vary.
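
A sketch of key compression on a concatenated index (the names are illustrative); COMPRESS 1 deduplicates the leading column across leaf entries:

CREATE INDEX ord_cust_date_idx
ON orders (customer_id, order_date)
COMPRESS 1;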

174)What is Virtual Column in Oracle 11G?
It is said that creating a virtual column and indexing the virtual column performs much better than a functional index on a table column; however, it all depends on the data stored in the table, so it is always necessary to test before replacing any function-based index with a virtual column. The virtual column concept applies only to Oracle releases from 11g onward. A virtual column is not free: if you have an existing table, you have to create and maintain the DDL required to create the virtual column, whereas a function-based index can be added, modified, and dropped independently from the table.

Several caveats are associated with virtual columns:

*    You can define a virtual column only on a regular heap-organized table. You can't define a virtual column on an index-organized table, an external table, a temporary table, object tables, or cluster tables.
*    Virtual columns can't reference other virtual columns.
*    Virtual columns can reference columns only from the table in which the virtual column is defined.
*    The output of a virtual column must be a scalar value (a single value, not a set of values).
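
A sketch, assuming an EMP table with SAL and COMM columns:

ALTER TABLE emp ADD (total_comp NUMBER GENERATED ALWAYS AS (sal + NVL(comm, 0)) VIRTUAL);

CREATE INDEX emp_total_comp_idx ON emp (total_comp);

SELECT empno FROM emp WHERE total_comp > 5000;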

175)What are the benefits of using Virtual Column in terms of indexing and partitioning?
It is said that creating a virtual column and indexing the virtual column performs much better than a functional index on a table column; however, it all depends on the data stored in the table, so it is always necessary to test before replacing any function-based index with a virtual column.

Answers for Oracle Interview Questions

146)What are the parameters which influence the sorting of Query Results?
SORT_AREA_SIZE (or PGA_AGGREGATE_TARGET when WORKAREA_SIZE_POLICY is AUTO)
The temporary tablespace and its size

147)Why and When Sorting Hinders the performance of an application?
Avoid Sorting

Disk sorting causes excessive I/O, which is the main source of the performance hit when a sort issued via the ORDER BY clause cannot fit inside the SORT_AREA_SIZE allocated in the PGA. Sorting starts in Oracle PGA RAM (defined by the limits of sort_area_size and the pga_aggregate_target 5% session limit), and a "disk sort" is invoked when the sort exceeds the maximum PGA allowed for the session.

A disk sort can far outweigh the unwanted effects of regular disk I/O read activity because a disk sort involves both physical reads and physical writes. First, Oracle must perform physical writes to a session's temporary tablespace for the sort activity it cannot handle in memory. Then, the database must turn right around and read that information back from disk to build the result set requested by the SQL query. So in essence, it's a double whammy especially for large result sets that are the product of heavy sort requests.

Oracle 10g Release 2 introduced a new sort algorithm which uses less memory and CPU resources; Oracle states that "the new sort algorithm shows more performance improvement for in-memory sorts." There is a hidden parameter called "_newsort_enabled" that turns on the new sorting algorithm.
We also see that in-memory sorts are CPU-intensive, and that faster processors will also improve in-memory sort performance.

Disk operations are about 10,000 times slower than a row's access in the data buffers.

If the clustering factor (an integer) roughly matches the number of blocks in the table, your table is in sequence with the index order. However, if the clustering factor is close to the number of rows in the table, it indicates that the rows in the table are out of sequence with the index.


In large active tables with a large number of index scans, row resequencing can triple the performance of queries.

Also, excessive disk sorting will cause a high value for free buffer waits, paging other tasks' data blocks out of the buffer.

Each sort in a query can consume memory up to the amount specified by SORT_AREA_SIZE, and there can be multiple sorts in a query. Also, if a query is executed in parallel, each PQ slave can consume memory up to the amount specified by SORT_AREA_SIZE for each sort it does.


SORT_AREA_SIZE is also used for inserts and updates to bitmap indexes. Setting this value appropriately results in a bitmap segment being updated only once for each DML operation, even if more than one row in that segment changes. Larger values of SORT_AREA_SIZE permit more sorts to be performed in memory. If more space is required to complete the sort than will fit into the memory provided, then temporary segments on disk are used to hold the intermediate sort runs.

SORT_AREA_SIZEs are not allocated until they are needed (and GROW to be sort_area_size bytes in size -- they don't start there) and are freed the second the sort is over.

Oracle will always use the cheapest method for sequencing a result set, and the optimizer will use index retrieval (extracting the rows in sorted order) if it consumes fewer resources than a back-end sort. Remember, a sort that cannot fit into RAM will have to be done in the TEMP tablespace, very slow with lots of disk I/O.

As a general rule, the sort_area_size should be large enough that only index creation and ORDER BY clauses using functions should be allowed to use a disk sort.

A query on V$SYSSTAT gives information on the sorts performed in memory and on disk, and also on the number of rows sorted.
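
For example:

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)', 'sorts (rows)');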

In general, Oracle sorting occurs under the following circumstances:

*    SQL using the ORDER BY clause
*    SQL using the GROUP BY clause
*    When an index is created
*    When a MERGE SORT is invoked by the SQL optimizer because inadequate indexes exist for a table join

IN MEMORY , ONE PASS AND MULTI PASS SORT
========================================

Explanation by Tom Kyte

In-memory sort: everything fits in memory, no disk is needed, everything is done in RAM.

One-pass sort: we read some amount of data and sort it, the sort workarea fills up, we write that to disk. We do it again (read more, sort it) - this too spills to disk. We do this over and over until we've read the entire set of data and sorted each of the chunks. Now we need to read a bit of each of the chunks we've sorted on disk and start returning data from there. In the one-pass sort, we can read a bit of each chunk and start returning data. If we cannot read a bit of EVERY chunk then....

we are in the realm of the multi-pass sort. We read and merge the first few chunks - resulting in a new set. We read and merge another set of chunks - resulting in a new set. And so on - until we've processed all of the output from the first pass and created a "second pass" of more merged (more sorted) data. Then we merge those in the sort area and start returning data.

Please do have a look at this article
=====================================

http://www.nocoug.org/download/2003-08/how_cbo_works.ppt

148)Whats the use of temporary Tablespace w.r.t. sorting?
Disk sorting causes excessive I/O, which is the main source of the performance hit when a sort issued via the ORDER BY clause cannot fit inside the SORT_AREA_SIZE allocated in the PGA. Sorting starts in Oracle PGA RAM (defined by the limits of sort_area_size and the pga_aggregate_target 5% session limit), and a "disk sort" is invoked when the sort exceeds the maximum PGA allowed for the session.

A disk sort can far outweigh the unwanted effects of regular disk I/O read activity because a disk sort involves both physical reads and physical writes. First, Oracle must perform physical writes to a session's temporary tablespace for the sort activity it cannot handle in memory. Then, the database must turn right around and read that information back from disk to build the result set requested by the SQL query. So in essence, it's a double whammy especially for large result sets that are the product of heavy sort requests.

149)What are the different operations which involve disk I/O?
In addition to sorting, other SQL operations that might require disk sorting are:
*    ANALYZE
*    Select DISTINCT
*    UNION causes a SORT to eliminate duplicates, whereas UNION ALL does not
*    INTERSECT
*    MINUS
*    Sort-Merge joins.

150)What is meant by Row chaining?
When a single row is stored in two or more blocks, this is called row chaining. This can cause potential performance issues because Oracle will have to retrieve data from multiple blocks (instead of one) when retrieving a chained row.

One of the methods of removing Row Chaining which hinders Performance is by Use of the MOVE Command
===================================================================================================

One method for resolving the row chaining within a table is to use the MOVE statement. When you move a table, Oracle requires an exclusive lock on the table; therefore you should perform this operation when there are no active transactions associated with the table being moved.

Also, as part of a MOVE operation, all of the rows are assigned a new ROWID. This will invalidate any indexes that are associated with the table. Therefore, as part of the move operation, you should rebuild all indexes associated with the table being moved. This example moves the EMP table:

SQL> alter table emp move;
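
After the move, each index on the table must be rebuilt; a sketch, assuming EMP has an index named EMP_PK:

SQL> alter index emp_pk rebuild;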

Answers for Oracle Interview Questions

123)What is the basic Structure of a PLSQL Block?
Declare
Begin
Exception
End
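
A minimal anonymous block showing all four sections:

DECLARE
  v_greeting VARCHAR2(20) := 'Hello';
BEGIN
  DBMS_OUTPUT.PUT_LINE(v_greeting);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
END;
/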

124)What are the different types of PLSQL Objects that you have come across?
Stored Procedures
Stored Functions
Packages

125)What is the use of a Package?
A package is a schema object that groups logically related PL/SQL types, items, and subprograms. Packages usually have two parts, a specification and a body, although sometimes the body is unnecessary. The specification (spec for short) is the interface to your applications; it declares the types, variables, constants, exceptions, cursors, and subprograms available for use. The body fully defines cursors and subprograms, and so implements the spec.

126)When should you go in for a Package?
Packages offer several advantages: modularity, easier application design, information hiding, added functionality, and better performance.

Modularity
Packages let you encapsulate logically related types, items, and subprograms in a named PL/SQL module. Each package is easy to understand, and the interfaces between packages are simple, clear, and well defined. This aids application development.

Easier Application Design
When designing an application, all you need initially is the interface information in the package specs. You can code and compile a spec without its body. Then, stored subprograms that reference the package can be compiled as well. You need not define the package bodies fully until you are ready to complete the application.

Information Hiding
With packages, you can specify which types, items, and subprograms are public (visible and accessible) or private (hidden and inaccessible). For example, if a package contains four subprograms, three might be public and one private. The package hides the implementation of the private subprogram so that only the package (not your application) is affected if the implementation changes. This simplifies maintenance and enhancement. Also, by hiding implementation details from users, you protect the integrity of the package.

Added Functionality
Packaged public variables and cursors persist for the duration of a session. So, they can be shared by all subprograms that execute in the environment. Also, they allow you to maintain data across transactions without having to store it in the database.

Better Performance
When you call a packaged subprogram for the first time, the whole package is loaded into memory. So, later calls to related subprograms in the package require no disk I/O. Also, packages stop cascading dependencies and thereby avoid unnecessary recompiling. For example, if you change the implementation of a packaged function, Oracle need not recompile the calling subprograms because they do not depend on the package body.

127)What is the difference between a Procedure and a Function?
A procedure can pass values back through OUT parameters but does not itself return a value; a function returns a value through its RETURN clause.
A user-defined function can be used in a SQL statement, whereas a procedure cannot.

128)Can a Function return more than one value?
No; a function returns exactly one value through its RETURN clause (although that value can be a composite type, such as a record or a collection).

129)Can a Function have OUT Parameters?
Technically a function can declare OUT parameters, but it is considered bad practice, and a function with OUT parameters cannot be called from SQL.

130)What is meant by a Function Based Index?
A function-based index stores the result of a function or expression applied to a column, allowing the optimizer to replace unnecessary large-table full-table scans with index range scans for WHERE clauses that apply that function.

You can't modify a column that has a function-based index applied to it. You'll have to drop the index, modify the column, and then re-create the index.
This is because the function-based index stores the precomputed results of the SQL function applied to the column's data, so the column cannot be changed underneath the index.

Any user-created SQL functions must be declared deterministic before they can be used in a function-based index. Deterministic means that for a given set of inputs, the function always returns the same results. You must use the keyword DETERMINISTIC when creating a user-defined function that you want to use in a function-based index
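
A sketch, assuming an EMP table with an ENAME column:

CREATE INDEX emp_upper_ename_idx ON emp (UPPER(ename));

-- the optimizer can now use an index range scan for:
SELECT * FROM emp WHERE UPPER(ename) = 'SMITH';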

131)What is benefit of using Functions?
The use of the RESULT_CACHE option when declaring a function helps by caching the results of the user-defined function for each set of input values. Separately, a function must be declared DETERMINISTIC before it can be used in a function-based index.

132)Can we create an index on a user defined Function?
Yes, the user-defined function needs to be declared deterministic (with the DETERMINISTIC keyword).

133)What is the use of the RESULT_CACHE Keyword in Functions?
This option is applicable to user-defined functions in Oracle; it caches the results of the function against the input parameter values, so repeated calls with the same inputs can be answered from the cache.
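
A minimal sketch, assuming the classic DEPT table (11g syntax):

CREATE OR REPLACE FUNCTION get_dept_name(p_deptno IN NUMBER)
  RETURN VARCHAR2
  RESULT_CACHE
IS
  v_dname dept.dname%TYPE;
BEGIN
  SELECT dname INTO v_dname FROM dept WHERE deptno = p_deptno;
  RETURN v_dname;   -- repeated calls with the same p_deptno are served from the cache
END;
/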

134)What do you mean by a global Parameter?
Any variable which is declared in the package specification, instead of within a procedure or function in the package body, is defined as a global variable.

135)Can a Global parameter be referenced by some other packages or PLSQL?
Yes, by referencing the variable using the name of the package in which it is defined.
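
A sketch (the package and variable names are illustrative):

CREATE OR REPLACE PACKAGE app_globals AS
  g_batch_id NUMBER;   -- declared in the spec, so visible to any caller
END app_globals;
/
BEGIN
  app_globals.g_batch_id := 42;   -- referenced via the package name
END;
/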

136)What is the use of an UNDO Tablespace?
Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.

Undo records are used to:
*    Roll back transactions when a ROLLBACK statement is issued
*    Recover the database
*    Provide read consistency
*    Analyze data as of an earlier point in time by using Oracle Flashback Query
*    Recover from logical corruptions using Oracle Flashback features

137)What is meant by FLASHBACK?
Oracle Flashback Technology is a group of Oracle Database features that let you view past states of database objects or return database objects to a previous state without using point-in-time media recovery.
With flashback features, you can do the following:
*    Perform queries that return past data
*    Perform queries that return metadata that shows a detailed history of changes to the database
*    Recover tables or rows to a previous point in time
*    Automatically track and archive transactional data changes
*    Roll back a transaction and its dependent transactions while the database remains online

138)How do you flashback a Query?
Using the "AS OF TIMESTAMP" clause, you will be able to flashback a Query.
Eg:-
SELECT * FROM <Table_name>
  AS OF TIMESTAMP
   TO_TIMESTAMP('2012-06-06 00:00:00', 'YYYY-MM-DD HH24:MI:SS');
  
139)Can we flashback a Table and database?
You can flashback a table which has been dropped (provided the recycle bin is enabled) using FLASHBACK TABLE ... TO BEFORE DROP.
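
For example, assuming a dropped EMP table and an enabled recycle bin:

FLASHBACK TABLE emp TO BEFORE DROP;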

You can flashback the Database using the "TO TIMESTAMP" Clause after shutting the Database and opening it in the MOUNT Mode.
Eg:-
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-2;
ALTER DATABASE OPEN RESETLOGS;

140)What is Snapshot too Old Error?
While setting a high value for the UNDO_RETENTION parameter can potentially minimize the possibility of receiving "snapshot too old" error, it doesn't guarantee that the database won't overwrite older undo data that may be needed by a running transaction. You can move long-running batch jobs to a separate time interval when other programs aren't running in the database, to avoid these errors.

Regardless, while you can minimize the occurrence of "snapshot too old" errors with these approaches, you can't completely eliminate such errors without specifying the guaranteed undo retention feature. With guaranteed undo retention, Oracle may keep new DML statements from executing rather than overwrite unexpired undo.

The "guarantee" part of the undo retention guarantee is real—Oracle will certainly retain undo at least for the time you specify and will never overwrite any of the unexpired undo extents that contain the undo required to satisfy the undo retention period.

141)What is the main use of a Global Temporary Table?
Applications often use some form of temporary data store for processes that are too complicated to complete in a single pass. Often, these temporary stores are defined as database tables or PL/SQL tables. In Oracle, the maintenance and management of temporary tables can be delegated to the server by using Global Temporary Tables.

Also the data residing within the Global Temporary Tables can be session specific or transaction specific. Also Global Temporary Tables allows the creation of indexes on the temporary data.
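
A sketch (the table and index names are illustrative):

CREATE GLOBAL TEMPORARY TABLE gtt_staging (
  id  NUMBER,
  val VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;   -- session-specific; use ON COMMIT DELETE ROWS for transaction-specific

CREATE INDEX gtt_staging_idx ON gtt_staging (id);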

142)What do you mean by Cursor Sharing?
The CURSOR_SHARING initialization parameter controls how Oracle handles statements that differ only in their literal values. It takes the values EXACT (the default), FORCE, and SIMILAR.

EXACT: The database doesn't replace any literals, and the optimizer sees the query as it's presented to the optimizer. The optimizer generates a different plan for each execution of the statement, based on the literal values in the statement. The plan would thus be an optimal one, but each statement has its own parent cursor, and therefore a statement that's executed numerous times can use a considerable amount of space in the shared pool. This could potentially lead to latch contention and a slowdown in performance.

FORCE: Regardless of whether there's a histogram, the optimizer will replace the literal values with a bind value and optimize this query as if it were in the following form:

select * from <Table_name> where <Column_name>=:b

The optimizer uses a single plan for each SQL statement, regardless of the literal values. Thus, the execution plan won't be optimal, as the plan is generic, and not based on the literal values. If a query uses literal values, the optimizer will use those values to find the most efficient execution plan. If there are no literals in the SQL statement, it's very hard for the optimizer to figure out the best execution plan. By "peeking" at the value of the bind variables, the optimizer can get a better idea of the selectivity of the where clause condition—it is almost as if literals had been used in the SQL statement. The optimizer peeks at the bind values during the hard parse state. Since the execution plan is based on the specific value of the bind variable that the optimizer happened to peek at, the execution plan may not be optimal for all possible values of the bind variable.

With bind peeking, suppose the optimizer peeks at the value Clerk in the JOB column and uses it to estimate the cardinality for the query. When it executes the same statement (with a different value in the JOB column, say, Manager), the optimizer will use the same plan that it generated the first time (JOB=Clerk). Since there is only one parent cursor and just child cursors for the distinct statements, there's less pressure on the shared pool. Note that a child cursor uses far less space in the shared pool than a parent cursor. Often, setting the cursor_sharing parameter to FORCE immediately resolves serious latch contention in the database, making this one of the few magic bullets that can help you quickly reduce latch contention.

SIMILAR (without a histogram on the JOB column): The database will use literal replacement—it uses a system-generated bind value instead of the literal value for the JOB column (Clerk). This is because the absence of a histogram on the JOB column tells the optimizer that the data in the JOB column isn't skewed, and therefore the optimizer chooses the same plan for each execution of the statement, even though the literal values are different. The optimizer thinks it shouldn't make any changes to the execution plans for the statements that differ only in literal values because the data is uniformly distributed. The SIMILAR setting without a histogram on the columns in a query provides the same query performance and a similar impact on the shared pool as when you specify the FORCE setting.

SIMILAR (with a histogram on the JOB column): When the optimizer sees the histogram in the JOB column, it realizes that the column is skewed—this tells the optimizer that the results of the query may vary widely depending on the literal value of the JOB column. Consequently, the optimizer generates a different plan for each statement based on the literal values—thus the plans are very efficient, as in the case when you specify the EXACT setting. The SIMILAR option with a histogram in place does use more space in the shared pool, but not as much as when you use the EXACT setting. The reason for this is that each statement has its own child cursor instead of a parent cursor.

The choice among the various settings of the cursor_sharing parameter really boils down to an assessment of what's more critical to database performance: using the default EXACT setting or SIMILAR (with a histogram on the relevant column) does provide better query performance but leads to the generation of numerous parent cursors (EXACT setting) or child cursors (SIMILAR setting). If there's a severe pressure in the shared pool, and consequent latch contention, the entire database will perform poorly. Under these circumstances, you're better off implementing a system-wide solution by setting the cursor_sharing parameter to FORCE, as this guarantees that there's only a single child cursor for each SQL statement. If you're concerned about the impact of a single SQL statement, just drop the histogram on the relevant columns used in the SQL statement and set the cursor_sharing parameter to FORCE—this will ensure that the optimizer uses system-generated bind values for the column(s) and ensures that the SQL statement uses much less space in the shared pool.
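
The parameter can be changed dynamically, for example:

ALTER SYSTEM SET cursor_sharing = FORCE;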

Answers for Oracle Interview Questions

101)Will Context Switching hinder the performance of an application?
When SQL statements are called from PL/SQL in a loop, the execution will switch back and forth between the PL/SQL engine and the SQL engine. This switch between two environments is known as a context switch. Context switches increase the elapsed time of your programs and introduce unnecessary CPU overhead. You should reduce the number of context switches by eliminating or reducing the switching between these two environments.

102)How will you overcome Context Switching in Oracle?

By reducing the number of switches between the SQL and PL/SQL environments, for example by using bulk operations (BULK COLLECT and FORALL) instead of row-by-row processing.

103)What is meant by BULK COLLECT?
One method of fetching data is an Oracle bulk collect. With BULK COLLECT, the PL/SQL engine tells the SQL engine to collect many rows at once and place them in a collection. The SQL engine retrieves all the rows, loads them into the collection, and switches back to the PL/SQL engine, so the rows are retrieved with only two context switches. The larger the number of rows you collect at a time, the more performance improvement you will see.
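
A minimal sketch, assuming the classic EMP table:

DECLARE
  TYPE emp_tab IS TABLE OF emp%ROWTYPE;
  l_emps emp_tab;
BEGIN
  SELECT * BULK COLLECT INTO l_emps FROM emp;   -- one switch out, one switch back
  DBMS_OUTPUT.PUT_LINE('Fetched ' || l_emps.COUNT || ' rows');
END;
/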

104)What is the use of the FORALL Clause in Oracle?
The FORALL syntax allows us to bind the contents of a collection to a single DML statement, allowing the DML to be run for each row in the collection without requiring a context switch each time.
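
A minimal sketch, assuming the classic EMP table (the EMPNO values are illustrative):

DECLARE
  TYPE id_tab IS TABLE OF emp.empno%TYPE;
  l_ids id_tab := id_tab(7369, 7499, 7521);
BEGIN
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE emp SET sal = sal * 1.1 WHERE empno = l_ids(i);   -- one context switch for all rows
END;
/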

105)What are the different types of collections available in Oracle?
NESTED TABLES
ASSOCIATIVE ARRAYS(INDEX BY TABLES)
VARRAYS

Please refer to the Oracle Documentation for more information on each of the collections.

106)Can you explain the collection types one by one along with their pitfalls?

Please refer to the Oracle Documentation for more information on each of the collections.

107)What are the different memory issues encountered when using a BULK COLLECT?
It is recommended you never issue a FETCH BULK COLLECT on a resultset without a LIMIT clause when you do not know ahead of time the size (or approximate size) of the resultset. You may be thinking that the database initialization parameter PGA_AGGREGATE_TARGET will insulate your system from such problems. This is not correct. This parameter only applies to memory allocations that the database can adjust internally as required, such as memory for sorting or hashing. If you ask for 100GB of PGA for PL/SQL collection memory, THE DATABASE WILL TRY TO HONOR THAT REQUEST, NO MATTER HOW MUCH TROUBLE THAT MAY CAUSE.

108)What is the use of a LIMIT Clause?
SGA memory is shared by all sessions connected to Oracle Database, but PGA memory is allocated for each session. Fortunately, PL/SQL makes it easy for developers to control the amount of memory used in a BULK COLLECT operation by using the LIMIT clause.
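
A sketch of the standard LIMIT pattern (the table name is illustrative):

DECLARE
  CURSOR c IS SELECT * FROM big_table;
  TYPE t_tab IS TABLE OF big_table%ROWTYPE;
  l_rows t_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 100;   -- caps PGA use per fetch
    EXIT WHEN l_rows.COUNT = 0;
    -- process l_rows here
  END LOOP;
  CLOSE c;
END;
/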

109)What is AWR?
The Oracle database uses AWR for problem detection and analysis as well as for self-tuning. A number of different statistics are collected by the AWR, including wait events, time model statistics, active session history statistics, various system- and session-level statistics, object usage statistics, and information on the most resource-intensive SQL statements.

The AWR is used to collect performance statistics including:

*    Wait events used to identify performance problems.
*    Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
*    Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
*    Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
*    Object usage statistics.
*    Resource intensive SQL statements.
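
Snapshots can also be taken manually, and the standard report generated from SQL*Plus:

exec dbms_workload_repository.create_snapshot;

@?/rdbms/admin/awrrpt.sql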

110)What is ADDM?
For Oracle systems, the statistical data needed for accurate diagnosis of a problem is saved in the Automatic Workload Repository (AWR). The Automatic Database Diagnostic Monitor (ADDM) analyzes the AWR data on a regular basis, then locates the root causes of performance problems, provides recommendations for correcting any problems, and identifies non-problem areas of the system. Because AWR is a repository of historical performance data, ADDM can be used to analyze performance issues after the event, often saving time and resources reproducing a problem.

The Automatic Database Diagnostic Monitor (ADDM) analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. For each of the identified issues it locates the root cause and provides recommendations for correcting the problem. An ADDM analysis task is performed and its findings and recommendations stored in the database every time an AWR snapshot is taken, provided the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. The findings (problems) are listed in order of potential impact on database performance, along with recommendations to resolve the issue and the symptoms that led to its discovery.

An ADDM analysis is performed every time an AWR snapshot is taken and the results are saved in the database. The obvious place to start viewing ADDM reports is Enterprise Manager.

The types of problems that ADDM considers include the following:

*    CPU bottlenecks - Is the system CPU bound by Oracle or some other application?
*    Undersized Memory Structures - Are the Oracle memory structures, such as the SGA, PGA, and buffer cache, adequately sized?
*    I/O capacity issues - Is the I/O subsystem performing as expected?
*    High load SQL statements - Are there any SQL statements which are consuming excessive system resources?
*    High load PL/SQL execution and compilation, as well as high load Java usage
*    RAC specific issues - What are the global cache hot blocks and objects; are there any interconnect latency issues?
*    Sub-optimal use of Oracle by the application - Are there problems with poor connection management, excessive parsing, or application level lock contention?
*    Database configuration issues - Is there evidence of incorrect sizing of log files, archiving issues, excessive checkpoints, or sub-optimal parameter settings?
*    Concurrency issues - Are there buffer busy problems?
*    Hot objects and top SQL for various problem areas
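
Outside Enterprise Manager, the standard ADDM report can be generated from SQL*Plus:

@?/rdbms/admin/addmrpt.sql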

111)What is the use of ADDM Report and Recommendations?
The types of problems that ADDM considers include the following:

*    CPU bottlenecks - Is the system CPU bound by Oracle or some other application?
*    Undersized Memory Structures - Are the Oracle memory structures, such as the SGA, PGA, and buffer cache, adequately sized?
*    I/O capacity issues - Is the I/O subsystem performing as expected?
*    High load SQL statements - Are there any SQL statements which are consuming excessive system resources?
*    High load PL/SQL execution and compilation, as well as high load Java usage
*    RAC specific issues - What are the global cache hot blocks and objects; are there any interconnect latency issues?
*    Sub-optimal use of Oracle by the application - Are there problems with poor connection management, excessive parsing, or application level lock contention?
*    Database configuration issues - Is there evidence of incorrect sizing of log files, archiving issues, excessive checkpoints, or sub-optimal parameter settings?
*    Concurrency issues - Are there buffer busy problems?
*    Hot objects and top SQL for various problem areas

A problem finding can be associated with a list of RECOMMENDATIONs for reducing the impact of the performance problem. Each recommendation has a benefit which is an estimate of the portion of DB time that can be saved if the recommendation is implemented. A list of recommendations can contain various alternatives for solving the same problem; you do not have to apply all the recommendations to solve a specific problem.

112)What is SQL Performance Analyzer?

Oracle 11g makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as:

*    Database, operating system, or hardware upgrades.
*    Database, operating system, or hardware configuration changes.
*    Database initialization parameter changes.
*    Schema changes, such as adding indexes or materialized views.
*    Refreshing optimizer statistics.
*    Creating or changing SQL profiles.

113)When exactly do you go in for a SQL Performance Analyzer?

Whenever we want to compare the performance of the statements in a tuning set before and after a database change, whether major or minor, we can make use of the SQL Performance Analyzer.

114)What is ASH?

Retrieving ASH information is necessary if you need to get session information more current than you can retrieve from the AWR report. Again, AWR information is generated only hourly by default. ASH information is gathered every second from V$SESSION, and stores the most useful session information to help gauge database performance at any given moment.

You may need to view the DBA_HIST_ACTIVE_SESS_HISTORY historical view in order to get the ASH information you need if your database is very active. The MMON background process, which manages the AWR hourly snapshots, also flushes ASH information to the historical view at the same time. If there is heavy activity on the database, and the buffer fills between the hourly AWR snapshots, the MMNL background process will wake up and flush the ASH data to the historical view.

115)What are the different Views that can be Queried for getting info on ASH?
ASH information is gathered every second from V$SESSION, and stores the most useful session information to help gauge database performance at any given moment.
You may need to view the DBA_HIST_ACTIVE_SESS_HISTORY historical view in order to get the ASH information you need if your database is very active.
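
A sketch of a typical ASH query, summarizing the last hour's activity:

SELECT session_id, sql_id, event, COUNT(*)
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24
GROUP  BY session_id, sql_id, event
ORDER  BY COUNT(*) DESC;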

116)What is meant by Latch?

Latches are lightweight serialization devices used to coordinate multiuser access to shared data structures, objects, and files.
Latches are locks designed to be held for extremely short periods of time, for example, the time it takes to modify an in-memory data structure. They are used to protect certain memory structures, such as the database block buffer cache or the library cache in the shared pool. Latches are typically requested internally in a 'willing to wait' mode. This means that if the latch is not available, the requesting session will sleep for a short period of time and retry the operation later. Other latches may be requested in an 'immediate' mode, which is similar in concept to a SELECT FOR UPDATE NOWAIT, meaning that the process will go do something else, such as try to grab an equivalent sibling latch that may be free, rather than sit and wait for this latch to become available. Since many requestors may be waiting for a latch at the same time, you may see some processes waiting longer than others. Latches are assigned rather randomly, based on the luck of the draw, if you will. Whichever session asks for a latch right after it was released will get it. There is no line of latch waiters, just a mob of waiters constantly retrying.
Oracle uses atomic instructions like 'test and set' and 'compare and swap' for operating on latches. Since the instructions to set and free latches are atomic, the operating system itself guarantees that only one process gets to test and set the latch even though many processes may be going for it simultaneously. Since the instruction is only one instruction, it can be quite fast. Latches are held for short periods of time and provide a mechanism for cleanup in case a latch holder 'dies' abnormally while holding it. This cleanup process would be performed by PMON.

Latch Spinning

Waiting for a latch can be an expensive operation. If the latch is not available immediately and we are willing to wait for it, as we likely are most of the time, then on a multi-CPU machine our session will spin trying over and over, in a loop, to get the latch. The reasoning behind this is that context switching (i.e., getting 'kicked off' the CPU and having to get back on the CPU) is expensive. So, if the process cannot get a latch immediately, we'll stay on the CPU and try again immediately rather than just going to sleep, giving up the CPU, and trying later when we'll have to get scheduled back on the CPU. The hope is that the holder of the latch is busy processing on the other CPU (and since latches are designed to be held for very short periods of time, this is likely) and will give it up soon. If after spinning and constantly trying to get the latch, we still fail to obtain it, only then will our process sleep, or take itself off of the CPU, and let some other work take place.

117)How do you overcome Latches in Oracle?
We can reduce latch contention by setting the DB_BLOCK_LRU_LATCHES parameter, increasing the number of latches available to service requests and thus decreasing latch contention.


118)What is the use of Database Replay?
You can use Database Replay to capture a workload on the production system and replay it on a test system with the exact timing, concurrency, and transaction characteristics of the original workload. Database Replay supports workload capture on a system running Oracle Database 10g Release 2 and newer releases. In order to capture a workload on a system running Oracle Database 10g Release 2, the database version can be 10.2.0.4 or higher. Workload replay is only supported on systems running Oracle Database 11g Release 1 and newer releases.
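
A sketch of the capture side (the capture name and the CAPTURE_DIR directory object are illustrative assumptions):

exec DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'peak_load', dir => 'CAPTURE_DIR');

-- ... let the production workload run ...

exec DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;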


119)How do you accept a Profile Recommended by ADDM?
To configure automatic SQL tuning, run the SET_TUNING_TASK_PARAMETER procedure in the DBMS_SQLTUNE package:
BEGIN
  DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES', value => 'TRUE');
END;
/
In this example, the automatic SQL tuning task is configured to automatically accept SQL profiles recommended by the SQL Tuning Advisor.
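
You can verify the setting afterwards through the advisor parameter view (a sketch):

SELECT parameter_name, parameter_value
FROM dba_advisor_parameters
WHERE task_name = 'SYS_AUTO_SQL_TUNING_TASK'
AND parameter_name = 'ACCEPT_SQL_PROFILES';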

120)Any idea on the Database Buffer Cache?
The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles. All user processes concurrently connected to the instance share access to the database buffer cache.

The database buffer cache and the shared SQL cache are logically segmented into multiple sets. This organization into multiple sets reduces contention on multiprocessor systems.

Oracle Database uses the buffer cache to store blocks read from disk. Oracle Database bypasses the buffer cache for particular operations, such as sorting and parallel reads.

The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being accessed.

When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list. As more buffers are continually moved to the MRU end of the LRU list, dirty buffers age toward the LRU end of the LRU list.

The first time an Oracle user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss.

Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU list, starting at the least recently used end of the list. The process searches either until it finds a free buffer or until it has searched the threshold limit of buffers.

If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer to the write list and continues to search. When the process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the MRU end of the LRU list.

If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process stops searching the LRU list and signals the DBW0 background process to write some of the dirty buffers to disk.
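
A rough health check of the cache is the buffer cache hit ratio. A sketch using the 10g/11g statistic names in V$SYSSTAT:

SELECT 1 - (phy.value / (cur.value + con.value)) AS buffer_cache_hit_ratio
FROM v$sysstat phy, v$sysstat cur, v$sysstat con
WHERE phy.name = 'physical reads cache'
AND cur.name = 'db block gets from cache'
AND con.name = 'consistent gets from cache';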

121)Can you throw some light on the Data Dictionary and the Library Cache?
The library cache includes the shared SQL areas, private SQL areas (in the case of a shared server configuration), PL/SQL procedures and packages, and control structures such as locks and library cache handles.
Shared SQL areas are accessible to all users, so the library cache is contained in the shared pool within the SGA.

The data dictionary is a collection of database tables and views containing reference information about the database, its structures, and its users. Oracle accesses the data dictionary frequently during SQL statement parsing. This access is essential to the continuing operation of Oracle.

The data dictionary is accessed so often by Oracle that two special locations in memory are designated to hold dictionary data. One area is called the data dictionary cache, also known as the row cache because it holds data as rows instead of buffers (which hold entire blocks of data). The other area in memory to hold dictionary data is the library cache. All Oracle user processes share these two caches for access to data dictionary information.
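
The efficiency of both caches can be checked from the dynamic performance views (a sketch):

-- Dictionary (row) cache: misses should be a small fraction of gets
SELECT SUM(getmisses) / SUM(gets) AS row_cache_miss_ratio FROM v$rowcache;

-- Library cache: pin hits should be close to the number of pins
SELECT SUM(pinhits) / SUM(pins) AS library_cache_hit_ratio FROM v$librarycache;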

122)How do you overcome Data Dictionary and Library Cache Contention?
Whenever new statements are added to the library cache area, a library cache latch must be acquired by Oracle as part of the operation. Oracle scans the library cache area during the parse phase for matching SQL statements. If one is not found, then Oracle will complete the task, obtain the library cache latch for the SQL statement, and then insert it into the library cache area for future use by the Oracle database.

Contention due to the library cache latches as well as shared pool latches is usually due to applications not using bind variables. You can set the CURSOR_SHARING parameter to force Oracle to use bind variables, even if your application hasn't specified them in the code. You can choose between a setting of FORCE or SIMILAR for this parameter to force the substitution of bind variables for hard-coded values of variables. The default setting for this parameter is EXACT, which means that the database won't substitute bind variables for literal values. When you set the CURSOR_SHARING parameter to FORCE, Oracle converts all literals to bind variables. The SIMILAR setting causes a statement to use bind variables only if doing so doesn't change a statement's execution plan. Thus, the SIMILAR setting seems a safer way to go about forcing the database to use bind variables instead of literals.

The library cache contention usually disappears once you set the CURSOR_SHARING parameter to FORCE or to SIMILAR.
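
For illustration (the EMP table and literals are hypothetical):

ALTER SYSTEM SET cursor_sharing = 'FORCE';

-- With FORCE in effect, these two statements share one cursor:
--   SELECT * FROM emp WHERE empno = 7369;
--   SELECT * FROM emp WHERE empno = 7499;
-- because both are rewritten internally to something like:
--   SELECT * FROM emp WHERE empno = :"SYS_B_0";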

There are two types of latches: "willing to wait" latches (an example is a library cache latch) and "not willing to wait" latches (an example is a redo copy latch). A process that is willing to wait will try to acquire a latch. If none are available, it will spin and then request the latch again. It will continue to do this up to the _SPIN_COUNT initialization parameter (note that spinning costs CPU). If it can't get a latch after spinning up to the _SPIN_COUNT, it will go to sleep, not do anything for a while, and then will wake up after one centisecond (one hundredth of a second). It will do this twice. It will then start this process again, spinning up to the _SPIN_COUNT and then sleeping for twice as long (two centiseconds). After doing this again, it will double again. So the pattern is 1, 1, 2, 2, 4, 4, etc. It will do this until it gets the latch. Every time the process sleeps on the latch, it records a latch sleep wait.

Data Dictionary Cache Contention

The "row cache objects" is a serialization latch that protects the access to the data dictionary cache in the SGA. This latch is acquired whenever you are referencing metadata objects in the data dictionary cache.

The row cache objects latch is closely related to the shared pool latch. Its purpose is to protect the cached data dictionary data for user processes that access the dictionary cache within Oracle.

Waits on the row cache objects latch can also indicate SQL with excessive hard parsing or excessive reliance on data dictionary information such as views, row-level security, synonyms, etc. The general solution for row cache latch waits is to increase SHARED_POOL_SIZE.

82)What is meant by External Tables in Oracle?
The external tables feature is a complement to existing SQL*Loader functionality. It enables you to access data in external sources as if it were in a table in the database.

Prior to Oracle Database 10g, external tables were read-only. However, as of Oracle Database 10g, external tables can also be written to.

A feature has been added to external tables that allows users to preprocess input data before it is sent to the access driver. Preprocessing also requires some special precautions on the part of the DBA. Because the executables called by preprocessing programs will be executed under the privileges of the Oracle software owner, and malicious executable code can cause a lot of damage, the DBA should be extremely careful in monitoring executables for potentially harmful code.

With external tables you can (see the sketch after this list):

o merge a flat file with an existing table in one statement.
o sort a flat file on the way into a table you want compressed nicely.
o do a parallel direct path load -- without splitting up the input file, writing
  umpteen scripts and so on
o run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr)
o do multi-table inserts
o flow the data through a pipelined plsql function for cleansing/transformation
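
For instance, a minimal external table definition might look like this (the directory object, file, and columns are hypothetical):

-- Assumes: CREATE DIRECTORY ext_dir AS '/u01/app/load';
CREATE TABLE emp_ext (
  empno NUMBER(4),
  ename VARCHAR2(10),
  sal   NUMBER(7,2)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.dat')
)
REJECT LIMIT UNLIMITED;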

Differences you find between SQL Loader and External Tables
===========================================================

*    If there are multiple primary input datafiles with SQL*Loader loads, a bad file and a discard file are created for each input datafile. With external table loads, there is only one bad file and one discard file for all input datafiles.

Unloading in the context of external tables means creating an external table from query results using the CTAS (CREATE TABLE AS SELECT) method, typically with the ORACLE_DATAPUMP access driver.
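
A sketch of such an unload (directory and names are hypothetical):

CREATE TABLE emp_unload
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY ext_dir
  LOCATION ('emp_unload.dmp')
)
AS SELECT empno, ename, sal FROM emp;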

83)Can we fire DML Statements on External Tables?
No. DML is not supported on external tables, and neither is indexing, even in 11g.

84)When should you go in for an External Table?
External tables have a wide variety of advantages compared to SQL*Loader. Also, in the case of external tables, the data resides outside the database, whereas data loaded using SQL*Loader resides inside the database. So when space is a constraint, going in for an external table is probably better than loading the data using SQL*Loader.

Also, data in an external table doesn't need an Oracle backup. Since the flat file resides outside the database, a normal file backup is fine, which can be taken care of by the OS admin.

85)What is meant by PLSQL Profiler?
The DBMS_PROFILER package provides an interface to profile existing PL/SQL applications and identify performance bottlenecks. You can then collect and persistently store the PL/SQL profiler data.

86)What is the use of SQL Profiler Tool in Oracle?
The DBMS_PROFILER package provides an interface to profile existing PL/SQL applications and identify performance bottlenecks. With this interface, you can generate profiling information for all named library units that are executed in a session. The profiler gathers information at the PL/SQL virtual machine level. This information includes the total number of times each line has been executed, the total amount of time that has been spent executing that line, and the minimum and maximum times that have been spent on a particular execution of that line.
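
A typical run looks like this (a sketch; the profiler tables come from $ORACLE_HOME/rdbms/admin/proftab.sql, and my_package.my_procedure is hypothetical):

BEGIN
  DBMS_PROFILER.START_PROFILER(run_comment => 'test run');
  my_package.my_procedure;        -- the code being profiled
  DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Line-level timings (TOTAL_TIME is in nanoseconds):
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
FROM plsql_profiler_units u, plsql_profiler_data d
WHERE d.runid = u.runid
AND d.unit_number = u.unit_number
ORDER BY d.total_time DESC;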

87)Have you heard of SQL Tuning Advisor and SQL Access Advisor?

Use the SQL Tuning Advisor for tuning SQL statements. Typically, you can run this advisor in response to an ADDM performance finding that recommends its use.
Additionally, you can run the SQL Tuning Advisor on the most resource-intensive SQL statements, as well as on a SQL workload.

The SQL Access Advisor helps define appropriate access structures such as indexes and materialized views to optimize SQL queries. The advisor takes a SQL workload as an input and recommends which indexes, materialized views, or logs to create, drop, or retain for faster performance. You can select your workload from different sources including current and recent SQL activity, a SQL repository, or a user-defined workload such as from a development environment.

The recommendations that this advisor makes include possible indexes, materialized views, or materialized view logs that can improve your query performance for the given workload.
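
A minimal sketch of running the SQL Tuning Advisor manually (the SQL_ID value is hypothetical):

DECLARE
  l_task VARCHAR2(128);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'fx2xyz0a1b2c3');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
  DBMS_OUTPUT.PUT_LINE('Task: ' || l_task);
END;
/

-- Then view the findings:
-- SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('<task name printed above>') FROM dual;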

90)What is the use of the SQL Performance Analyzer?

Oracle 11g makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as:

•Database, operating system, or hardware upgrades.
•Database, operating system, or hardware configuration changes.
•Database initialization parameter changes.
•Schema changes, such as adding indexes or materialized views.
•Refreshing optimizer statistics.
•Creating or changing SQL profiles.
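
A sketch of a before/after comparison against an existing SQL tuning set (the tuning set name is hypothetical):

DECLARE
  l_task VARCHAR2(128);
BEGIN
  l_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'MY_STS');
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => l_task,
                                   execution_type => 'TEST EXECUTE',
                                   execution_name => 'before_change');
  -- ... make the database change being assessed ...
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => l_task,
                                   execution_type => 'TEST EXECUTE',
                                   execution_name => 'after_change');
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => l_task,
                                   execution_type => 'COMPARE PERFORMANCE');
END;
/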


91)What do you mean by Fragmentation?

Tom Kyte's definition of fragmentation is that you have many "small" holes (regions of contiguous free space) that are too small to be the NEXT extent of any object. These holes of free space result from dropping (or truncating) objects, and the resulting free extents cannot be used by any other object in that tablespace. This is a direct result of using a PCTINCREASE that is not zero and having many weird-sized extents (every extent is a unique size and shape).

92)What is meant by Shared pool Fragmentation?

Performance degradation occurs in the shared pool in situations where pool fragmentation forces Oracle to search for and free chunks of unused pool to satisfy the current request. Fragmentation of the shared pool can lead to memory not being allocated.

Shared pool latch contention is almost always due to shared pool fragmentation. A process will hold the shared pool latch for a long time while searching for a free chunk of memory when the shared pool is fragmented. The shared pool may be fragmented into many smaller free chunks, but whenever there is a request for a large piece of memory, the recreatable or freeable chunks of memory have to be flushed from the shared pool so that they can be coalesced into a large chunk of free memory that is able to satisfy the request. That coalescing might involve walking through the shared pool freelists many times and flushing freeable chunks until a large enough chunk is created.

93)What happens on the Oracle Backend whenever you fire a Query?

Broadly: Oracle parses the statement (syntax and semantic checks, plus a search of the shared pool for an identical statement), binds any variables, executes the plan, and fetches the rows. The next two questions expand on these phases.

94)What is parsing?

Whenever a statement is executed, Oracle follows a methodology to evaluate the statement in terms of syntax, validity of objects being referred and of course, privileges to the user. Apart from this, Oracle also checks for identical statements that may have been fired, with the intention of reducing processing overheads. All this takes place in a fraction of a second, even less, without the user knowing what is happening to the statement that was fired. This process is known as Parsing.

Syntax parse - Oracle parses the syntax to check for misspelled SQL keywords.
Semantic parse - Oracle verifies all table & column names from the dictionary and checks to see if you are authorized to see the data.

95)Can you explain the Parse, Bind , Execute and Fetch Phases?

We can refer to the DBA Study guide for more information.

96)What is a Cursor?
To execute a multi-row query, Oracle opens an unnamed work area that stores processing information. You can access this area through an explicit cursor, which names the work area, or through a cursor variable, which points to the work area. To create cursor variables, you define a REF CURSOR type, then declare cursor variables of that type.

Oracle implicitly opens a cursor to process each SQL statement not associated with an explicit cursor. In PL/SQL, you can refer to the most recent implicit cursor as the SQL cursor, which always has the attributes %FOUND, %ISOPEN, %NOTFOUND, and %ROWCOUNT. They provide information about the execution of data manipulation statements. The SQL cursor has additional attributes, %BULK_ROWCOUNT and %BULK_EXCEPTIONS, designed for use with the FORALL statement.
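
A minimal explicit cursor sketch (table and columns are hypothetical):

DECLARE
  CURSOR c_emp IS SELECT ename, sal FROM emp;
  l_row c_emp%ROWTYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO l_row;
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(l_row.ename || ': ' || l_row.sal);
  END LOOP;
  CLOSE c_emp;
END;
/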

97)What are the different Types of Cursors?
Implicit Cursor
Explicit Cursor
REF Cursor
BULK COLLECT and FORALL (bulk cursor processing)
Parameterized Cursor

98)What is a parameterized Cursor?
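
A parameterized cursor is an explicit cursor that accepts arguments when it is opened, so one declaration can be reused for different values. A minimal sketch (table and columns are hypothetical):

DECLARE
  CURSOR c_emp (p_deptno NUMBER) IS
    SELECT ename FROM emp WHERE deptno = p_deptno;
BEGIN
  FOR r IN c_emp(10) LOOP        -- can be reopened with 20, 30, ...
    DBMS_OUTPUT.PUT_LINE(r.ename);
  END LOOP;
END;
/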

99)What is a reference Cursor?

A cursor variable is a pointer to the current row in the result set of a multi-row query. Cursor variables have several advantages over explicit cursors, including:

*    Cursor variables can point to a variety of queries provided the queries have a suitable return type.  In contrast, explicit cursors are tied to individual queries.
*    Cursor variables allow cursors to be opened independently of being processed.
*    Cursor variables can be passed as parameters between application layers, as well as between server side components.
*    Cursor variables can be used to reduce client-server network traffic by allowing several cursors to be opened on the server in a single round trip.
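
A sketch of returning a cursor variable to a caller (SYS_REFCURSOR is the built-in weak REF CURSOR type; table and columns are hypothetical):

CREATE OR REPLACE FUNCTION get_emps (p_deptno NUMBER) RETURN SYS_REFCURSOR
AS
  l_rc SYS_REFCURSOR;
BEGIN
  OPEN l_rc FOR SELECT ename, sal FROM emp WHERE deptno = p_deptno;
  RETURN l_rc;   -- the caller fetches from and closes the cursor
END;
/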

100)What is meant by Context Switching?
When we consider "CONTEXT SWITCHING" in relation to Oracle Database, we specifically refer to the exchange of processing control between the SQL and PL/SQL engines (without necessarily understanding what occurs in such a process). These two engines are separate and distinct but we use them interchangeably. This means that when we call SQL from PL/SQL or vice versa, the calling context needs to store its process state and hand over control and data to its counterpart engine (which may or may not be picking up from an earlier switch). This switching cycle is computationally intensive and can typically be repeated so many times that its effects on response times can become quite noticeable.

When a SQL statement is wrapped in a PL/SQL function and called from SQL queries, the cost of context switching is automatically doubled: it now occurs both inside and outside the function.


When SQL statements are called from PL/SQL in a loop, execution switches back and forth between the PL/SQL engine and the SQL engine. This switch between two environments is known as a context switch. Context switches increase the elapsed time of your programs and introduce unnecessary CPU overhead. You should reduce the number of context switches by eliminating or reducing the switching between these two environments, for example with bulk operations as sketched below.
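
A sketch of the reduction (table and columns are hypothetical):

-- Row-by-row: one context switch per iteration
-- FOR r IN (SELECT empno FROM emp) LOOP
--   UPDATE emp SET sal = sal * 1.1 WHERE empno = r.empno;
-- END LOOP;

-- Bulk version: two switches in total for the same work
DECLARE
  TYPE t_ids IS TABLE OF emp.empno%TYPE;
  l_ids t_ids;
BEGIN
  SELECT empno BULK COLLECT INTO l_ids FROM emp;  -- one switch in
  FORALL i IN 1 .. l_ids.COUNT                    -- one switch for all updates
    UPDATE emp SET sal = sal * 1.1 WHERE empno = l_ids(i);
END;
/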

75)What is the difference that you find with a Locally managed and a Dictionary managed Tablespace?

Benefits of LMTs
Below are the key benefits offered by LMTs. Not all are achievable when migrating to LMTs.

Dictionary contention is reduced.

Extent management in DMTs is maintained and carried out at the data dictionary level. This requires exclusive locks on dictionary tables. Heavy data processing that results in extent allocation/deallocation may sometimes result in contentions in the dictionary.

Extents are managed at the datafile level in LMTs. Dictionary tables are no longer used for storing extent allocation/deallocation information. The only information still maintained in the dictionary for LMTs is the tablespace quota for users.

Space wastage removed.

In DMTs, there is no implicit mechanism to enforce uniform extent sizes. The extent sizes may vary depending on the storage clause provided at the object level or the tablespace level, resulting in space wastage and fragmentation.

Oracle enforces uniform extent allocation in LMTs (when created with the UNIFORM SIZE clause). Space wastage is removed, as this results in all extents in the tablespace being the same size.

No Rollback generated.

In DMTs, all extent allocations and deallocations are recorded in the data dictionary. This generates undo information, thus consuming vital resources, and may compete with other processes.

In LMTs, no rollback is generated for space allocation and deallocation activities.

ST enqueue contention reduced.

In DMTs, the Space Transaction (ST) enqueue is acquired when there is a need for extent allocation. It is also exclusively acquired by the SMON process for coalescing free space in DMTs. Only one such enqueue exists per instance, and it may sometimes result in contention and performance issues if heavy extent processing is being carried out. The following error is common in such scenarios.

ORA-01575: timeout warning for space management resource
As the ST enqueue is not used by LMTs, overall ST enqueue contention is reduced.

Recursive space management operations removed.

In DMTs, the SMON process wakes up every 5 minutes to coalesce free space. Optionally, the ALTER TABLESPACE <tablespace name> COALESCE command is also used to coalesce DMTs and reduce fragmentation.

On the other hand, LMTs avoid recursive space management operations and automatically track adjacent free space, thus eliminating the need to coalesce free extents. This further reduces fragmentation.

Fragmentation reduced.

Fragmentation is reduced in LMTs but not completely eliminated. Since adjacent free spaces are automatically tracked, there is no need to do coalescing, as is required in the case of DMTs.

Storage Parameters in DMTs

Storage parameters are used in DMTs to specify the object sizing. These parameters are not of much importance in UNIFORM type LMTs but play a role in deciding the initial allocation of space.
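
For reference, a locally managed tablespace with uniform extents is created like this (names and paths are hypothetical):

CREATE TABLESPACE data_lmt
DATAFILE '/u01/oradata/orcl/data_lmt01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL
UNIFORM SIZE 1M;

-- or let Oracle choose extent sizes: EXTENT MANAGEMENT LOCAL AUTOALLOCATE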

76)What do you mean by ASSM?

Prior to Oracle 9i, each block had to be read so the freelist could be checked to see if there was room in the block. In 9i, the bitmap can be checked instead, reducing the number of blocks read unnecessarily.

Starting from Oracle9i, the PCTUSED, FREELISTS, and FREELIST GROUPS parameters are ignored with locally managed tablespaces and ASSM. The ASSM tablespace is implemented by adding the SEGMENT SPACE MANAGEMENT AUTO clause to the tablespace definition syntax. ASSM tablespaces automate freelist management by replacing the traditional one-way linked-list freelists with bitmap freelists, and remove the ability to specify PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters for individual tables and indexes.

In an LMT, Oracle moves the tablespace information out of the data dictionary tablespace and stores it directly within the tablespace itself. The second major tablespace enhancement in Oracle9i was automatic segment space management (ASSM), a.k.a. bitmap freelists. With ASSM, the linked-list freelists are replaced with bitmaps, a binary array that turns out to be very fast and efficient for managing storage extents and free blocks, thereby improving segment storage internals.

Automatic free space management is only available in locally managed tablespaces. It removes the need for managing freelists and freelist groups by using bitmaps to describe the space usage of each block within a segment. The bitmap is stored in separate blocks known as bitmapped blocks (BMBs). This relieves the contention on the segment header that occurs with freelists.

Usage of ASSM used to cause some performance hindrance, but this has largely been overcome with 11g Release 2.

Without multiple freelists, every Oracle table and index has a single data block at the head of the segment to manage free blocks for the object and provide data blocks for new rows created by any SQL insert statements. A buffer busy wait occurs when a data block is inside the data buffer cache but is unavailable because it is locked by another DML transaction. When multiple sessions insert into the same table, they are forced to wait while Oracle assigns free blocks, one at a time.

With ASSM, Oracle claims to improve the performance of concurrent DML operations significantly, since different parts of the bitmap can be used simultaneously, eliminating serialization for free space lookups. Segment space management auto is a good method for handling objects with varying row sizes.

Several studies have shown that large-table full-table scans (FTS) will run longer with ASSM than with traditional freelist management; ASSM FTS tablespaces are consistently slower than freelist FTS operations. This implies that ASSM may not be appropriate for decision support systems and warehouse applications unless partitioning is used with Oracle Parallel Query.

Large objects cannot use ASSM, and separate tablespaces must be created for tables that contain LOB datatypes. You cannot create a temporary tablespace with ASSM. This is because of the transient nature of temporary segments when sorting is performed.

Only locally managed tablespaces can use bitmap segment management.

When an object such as a table or index is created in a locally managed tablespace with automatic segment space management enabled, there is no need to specify PCTUSED or FREELISTS.

PCTFREE is still required even with ASSM because it determines how rows will be packed into blocks.

FREELIST
This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention since the multiple processes will be getting the same block from the freelist, until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

AUTO - This keyword tells Oracle that you want to use bitmaps to manage the free space within segments. A bitmap, in this case, is a map that describes the status of each data block within a segment with respect to the amount of space in the block available for inserting rows. As more or less space becomes available in a data block, its new state is reflected in the bitmap. Bitmaps allow Oracle to manage free space more automatically, and thus, this form of space management is called automatic segment-space management.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.
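
A sketch of calling it (owner and table are hypothetical; FS1 through FS4 are the 0-25%, 25-50%, 50-75%, and 75-100% free buckets):

DECLARE
  l_unf NUMBER;  l_unfb NUMBER;
  l_fs1 NUMBER;  l_fs1b NUMBER;
  l_fs2 NUMBER;  l_fs2b NUMBER;
  l_fs3 NUMBER;  l_fs3b NUMBER;
  l_fs4 NUMBER;  l_fs4b NUMBER;
  l_full NUMBER; l_fullb NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
    segment_owner      => 'SCOTT',
    segment_name       => 'EMP',
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unfb,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1b,
    fs2_blocks => l_fs2, fs2_bytes => l_fs2b,
    fs3_blocks => l_fs3, fs3_bytes => l_fs3b,
    fs4_blocks => l_fs4, fs4_bytes => l_fs4b,
    full_blocks => l_full, full_bytes => l_fullb);
  DBMS_OUTPUT.PUT_LINE('Full blocks below HWM: ' || l_full);
END;
/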

77)Why does the performance of a query increase by setting the Segment Space Management to AUTO?

With ASSM, Oracle claims to improve the performance of concurrent DML operations significantly, since different parts of the bitmap can be used simultaneously, eliminating serialization for free space lookups. Segment space management auto is a good method for handling objects with varying row sizes.

78)What is the memory architecture of Oracle?

You can explore the same from the DBA Study guide.

79)What is SGA and PGA?

SGA (System Global Area)

A system global area (SGA) is a group of shared memory structures that contain data and control information for one Oracle database instance. If multiple users are concurrently connected to the same instance, then the data in the instance's SGA is shared among the users. Consequently, the SGA is sometimes called the shared global area.
An SGA and Oracle processes constitute an Oracle instance. Oracle automatically allocates memory for an SGA when you start an instance, and the operating system reclaims the memory when you shut down the instance. Each instance has its own SGA.
The SGA is read/write. All users connected to a multiple-process database instance can read information contained within the instance's SGA, and several processes write to the SGA during execution of Oracle.

The SGA contains the following data structures:
*    Database buffer cache
*    Redo log buffer
*    Shared pool
*    Java pool
*    Large pool (optional)
*    Streams pool

PGA (Program Global Area)

Oracle Database allocates a program global area (PGA) for each server process. The PGA is used to process SQL statements and to hold logon and other session information. For the purposes of memory management, the collection of all PGAs is known as the instance PGA. Using an initialization parameter, you set the size of the instance PGA, and the database distributes memory to individual PGAs as needed.


The content of the PGA memory varies, depending on whether or not the instance is running the shared server option. Generally speaking, the PGA memory is divided into the following areas:
*    Session Memory
*    Private SQL Area

The Private SQL Area section includes the following topics:
*    Cursors and SQL Areas
*    Private SQL Area Components
*    SQL Work Areas
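
Both areas can be inspected from the dynamic performance views (a sketch):

-- Current SGA component sizes
SELECT name, bytes FROM v$sgainfo;

-- Instance-wide PGA statistics
SELECT name, value FROM v$pgastat
WHERE name IN ('total PGA allocated', 'maximum PGA allocated');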

80)What are the components of the SGA and the PGA?

The SGA contains the following data structures:
*    Database buffer cache
*    Redo log buffer
*    Shared pool
*    Java pool
*    Large pool (optional)
*    Streams pool

The content of the PGA memory varies, depending on whether or not the instance is running the shared server option. Generally speaking, the PGA memory is divided into the following areas:
*    Session Memory
*    Private SQL Area

The Private SQL Area section includes the following topics:
*    Cursors and SQL Areas
*    Private SQL Area Components
*    SQL Work Areas

81)What is meant by SGA_TARGET, PGA_AGGREGATE_TARGET and MEMORY_TARGET, and what are the benefits of the same?

SGA_TARGET enables automatic shared memory management, letting Oracle size the individual SGA components within a single target. PGA_AGGREGATE_TARGET sets a target for the total PGA memory across all server processes. MEMORY_TARGET (from 11g) manages the SGA and the instance PGA together under one limit. One visible effect of the chosen memory management mode is how the server result cache is sized:
*    If you're using automatic memory management by setting the MEMORY_TARGET parameter, Oracle allocates 0.25% of the MEMORY_TARGET parameter's value to the server result cache.
*    If you're using automatic shared memory management with the SGA_TARGET parameter, the allocation is 0.5% of the SGA_TARGET parameter.
*    If you're using manual memory management by setting the SHARED_POOL_SIZE parameter, the allocation is 1% of the SHARED_POOL_SIZE parameter.
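
For example (sizes are illustrative):

ALTER SYSTEM SET memory_max_target = 4G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 3G SCOPE = SPFILE;

-- With MEMORY_TARGET = 3 GB, the server result cache defaults to
-- 0.25% of 3072 MB, i.e. roughly 7.7 MB.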