Metalink Master Note IDs

 

Master Index for Grid Control Jobs (Doc ID 1303562.1)
Oracle Enterprise Manager README (Doc ID 1279403.1)

Master Note: Overview of Oracle Segment Storage (Doc ID 1491960.1)

Different Kinds of Temporary Segments (Doc ID 181132.1)
Master Note For Oracle Recovery Manager (RMAN) (Doc ID 1116484.1)

RMAN Backup Retention Policy (Doc ID 462978.1)
Frequently Asked Questions on RMAN Backup Retention Policy (Doc ID 463875.1)
How does the RMAN Retention Policy Obsolete Incremental Backupsets and Archivelog Files (Doc ID 206862.1)
Oracle Suggested Strategy & Backup Retention (Doc ID 351455.1)
How Do I Set A Retention Policy For Tape Backups And Disk Backups Differently? (Doc ID 434345.1)
Recovery Catalog for RMAN Backup (Doc ID 452529.1)

How To Check (Validate) If RMAN Backup(s) Are Good (Doc ID 338607.1)

 

http://coskan.wordpress.com/2007/09/14/what-i-learned-about-shared-pool-management/

 

Note:396940.1 Troubleshooting and Diagnosing ORA-4031 Error.

 

Processes on the Primary:

1. LGWR/ARCH – Redo or archived logs from the primary are shipped to the standby site by the LGWR or ARCH process.

2. FAL – Fetch Archive Log. When there is a communication loss between the primary and the standby (e.g. a network failure), log shipping is affected and gaps may appear; once communication is re-established, the FAL processes enable automatic gap resolution and resynchronization.

3. LNS – LogWriter Network Service. With Oracle 11g, by default, LNS is used on the primary to initiate a connection with the standby; the standby listener responds by spawning a process called the Remote File Server (RFS).

Processes on the Standby:

RFS – Remote File Server receives the redo log records from the primary site.

The RFS process creates a network connection with the primary and sits waiting for data to arrive. Once data begins arriving from the primary, the RFS process places it either into standby redo logs or into archived redo logs.

MRP – Managed Recovery Process applies redo/archived logs to the standby.

LSP – Logical Standby Process translates the redo records into SQL and applies the SQL to the logical standby database.

If the DG Broker is used, DMON runs on both the primary and the standby.

Oracle Database 12c Release 1 – Installing Oracle Database Software and Creating a Database step by step guide (Doc ID 1566423.1)

SQL:

  • Increased size limit for VARCHAR2, NVARCHAR2, and RAW datatypes to 32K  (from 4K).
  • We can make a column invisible.
    SQL> create table test (id number, secret varchar2(30) invisible);
    SQL> alter table test modify secret invisible;
    SQL> alter table test modify secret visible;
  • Oracle Database 12c has new feature called “Identity Columns” which are auto-incremented at the time of insertion (like in MySQL).
    SQL> create table dept (dept_id number generated as identity, dept_name varchar2(30));
    SQL> create table dept (dept_id number generated as identity (start with 1 increment by 1 cache 20 noorder), dept_name varchar2(30));
  • Temporary undo (for global temporary tables) will not generate undo.
  • No need to shutdown database for changing archive log mode.
  • Duplicate Indexes – Create duplicate indexes on the same set of columns. Up to Oracle 11.2, if we try to create an index using the same columns, in the same order, as an existing index, we get an error. In some cases, we might want two different types of index on the same data (such as in a data warehouse, where we might want a bitmap index on the leading edge of a set of columns that already exists in a B-tree index).
  • PL/SQL inside SQL: this new feature allows PL/SQL to be embedded directly in SQL statements (i.e. to create a one-shot function in the WITH clause).
  • The object DEFAULT clause has been enhanced. Adding a column with a default value to an existing table is much faster in Oracle 12c and consumes less space than before (just a pointer into the Oracle data dictionary); this also applies to sequences, identity types, etc.
  • Pagination queries – SQL keywords to limit the number of records to be displayed, replacing ROWNUM-based pagination (see the worked example after this list).
    select … fetch first n rows only;
    select … offset m rows fetch next n rows only;
    select … fetch first n percent rows only;
    select … fetch first n percent rows with ties;
  • Moving and renaming a datafile is now ONLINE; there is no need to take the datafile offline.
  • The TRUNCATE command has been enhanced with a CASCADE option which follows child records.
  • Temporary undo reduces the contents of the regular UNDO tablespace, allowing better flashback operations.
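
A minimal sketch of the row-limiting clause, assuming a hypothetical emp table (all names are illustrative):

SQL> select ename, sal from emp order by sal desc fetch first 5 rows only;
SQL> select ename, sal from emp order by sal desc offset 10 rows fetch next 5 rows only;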

PL/SQL:

  • PL/SQL Unit Security – A role can now be granted to a code unit. That means you can determine, at a very fine grain, who can access a specific unit of code.
  • SQL WITH Clause Enhancement – In Oracle 12c, we can declare PL/SQL functions in the WITH clause of a SELECT statement (see the sketch after this list).
  • Implicit Result Sets – create a procedure, open a ref cursor, return the results. No types, no muss, no mess. Streamlined data access (kind of a catch-up to other databases).
  • MapReduce in the Database – MapReduce can be run from PL/SQL directly in the database.
  • We can use Boolean values in dynamic PL/SQL. There are still no Booleans as database column types.
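
A minimal sketch of a one-shot function declared in the WITH clause (table and function names are illustrative; in SQL*Plus the statement is terminated with / rather than ;):

WITH
  FUNCTION double_sal(p_sal NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_sal * 2;
  END;
SELECT ename, double_sal(sal) AS doubled FROM emp
/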


Database:

  • New background processes – LREG (Listener Registration), SA (SGA Allocator), RM
  • Oracle Database 12c Data Pump will allow turning off redo for the import operation (only).
  • Enhanced statistics (Hybrid histograms for more than 254 distinct values, dynamic sampling up to eleven, and stats automatically gathered during load).
  • Row pattern matching – MATCH_RECOGNIZE (identification of patterns within a table, ordered/sorted by the SQL statement); see the sketch after this list.
  • Adaptive execution plans (change of the plan at runtime).
  • Oracle 12c includes database level redaction, allowing granular control of access to sensitive data.
  • Oracle introduced the PGA_AGGREGATE_LIMIT parameter, which enforces a hard limit on total PGA memory usage.
  • UNDO for temporary tables can now be managed in TEMP, rather than the regular UNDO tablespace.
  • Oracle Enterprise Manager Express (a lightweight version of EM Cloud Control 12c) replaces the Oracle Database Console and is installed automatically.
  • Temporary undo also reduces the size of the redo associated with the regular UNDO tablespace.
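
A hedged sketch of MATCH_RECOGNIZE, finding V-shaped price dips in a hypothetical stock_prices(symbol, trade_date, price) table:

SELECT *
FROM stock_prices
MATCH_RECOGNIZE (
  PARTITION BY symbol
  ORDER BY trade_date
  MEASURES STRT.trade_date AS start_date,
           LAST(DOWN.trade_date) AS bottom_date,
           LAST(UP.trade_date)   AS end_date
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO LAST UP
  PATTERN (STRT DOWN+ UP+)
  DEFINE
    DOWN AS DOWN.price < PREV(DOWN.price),
    UP   AS UP.price   > PREV(UP.price)
);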


ASM: (Oracle Grid Infrastructure new features)

  • Introduction of Flex ASM. ASM now runs on only a subset of nodes (three instances by default) even if the cluster has more; database instances on nodes without a local ASM instance connect to a remote one, much as clients use SCAN. Previously, if ASM failed on a node the entire node became useless, whereas in 12c the ability to get the extent map from a remote ASM instance keeps the node usable.
  • Introduction of Flex Cluster, with a lightweight cluster stack on Leaf nodes and the traditional stack on Hub nodes. The application layer is the typical example of a Leaf node, as such nodes do not require direct access to the shared storage.


RMAN:

  • RMAN TABLE Point-In-Time Recovery (a combination of Data Pump and RMAN; an auxiliary instance is required) – see the sketch after this list.
  • Running SQL commands in RMAN without the SQL keyword.
  • Recover or copy files from standby databases.

Refresh a single datafile on the primary from the standby (or on the standby from the primary).

  • Table-level restoration, i.e. object-level recovery.
  • Incremental recovery is faster, with many of the manual tasks removed.
  • Rolling forward/Synchronizing a standby database.
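
A hedged sketch of table point-in-time recovery (schema, table, time, and paths are illustrative):

RMAN> RECOVER TABLE scott.emp
      UNTIL TIME "TO_DATE('2014-01-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
      AUXILIARY DESTINATION '/u01/aux'
      REMAP TABLE scott.emp:emp_recovered;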


Partitioning:

  • Partitioning enhancements (partition truncate, cascading, global index cleanup, online moving of a partition, …)
  • Multiple partition operations in a single DDL.
  • Interval-Ref Partitions – we can create a ref partition (to relate several tables with the same partitions) as a sub-partition to the interval type.
  • Cascade for TRUNCATE and EXCHANGE partition.
  • Asynchronous Global Index maintenance for DROP and TRUNCATE. Command returns instantly, but index cleanup happens later.
  • Online move of a partition (without DBMS_REDEFINITION) – see the sketch below.
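
A minimal sketch of an online partition move (table, partition, and tablespace names are illustrative):

SQL> ALTER TABLE sales MOVE PARTITION sales_q1_2014 ONLINE TABLESPACE new_ts UPDATE INDEXES;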


Patching:

  • Centralised patching.
  • We can test patches on database copies, rolling patches out centrally once testing is complete.


Compression:
Automated compression with heat map.

Optimization can run on live databases with no disruption. Data optimization monitors data usage via a heat map and, driven by policy, archives old data while hot data is compressed for faster access. Inactive data can be compressed more aggressively or archived, greatly reducing storage costs.

Advanced Row compression (for Hot Data).
Columnar Query compression (for Warm Data).
Columnar Archive compression (for Archive Data).

Data Guard:
Oracle Database 12c introduces a new redo transport mode in which the standby acknowledges the primary as soon as the redo has been received in memory, without waiting for the write to disk (SYNC NOAFFIRM). This feature is called “Fast Sync” redo transport; a sketch follows.
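
A hedged sketch of enabling Fast Sync on the primary (service name and DB_UNIQUE_NAME are illustrative):

SQL> ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';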

A new type of redo destination called “Far Sync Standby” can now be created. A “Far Sync Standby” is composed only of standby control files, standby redo logs, and some disk space for the archive logs that are forwarded to the standby database. Failover and switchover operations are totally transparent, as the “Far Sync Standby” cannot itself be used as the target.

Data Guard Broker commands have been extended. The “validate database” command checks whether the database is ready for a role transition; for example (the database name below is illustrative):
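
DGMGRL> VALIDATE DATABASE stby;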

Data Guard Broker now supports cascaded standby databases.

 

Global temporary tables can now be used on an Active Data Guard standby database.

Pluggable Databases:
In Oracle 12c, in a pluggable database environment, we can create a single container database and plug multiple databases into it. All these databases then share the exact same Oracle server/background processes and memory, unlike previous versions where each database had its own background processes and shared memory. This helps with database consolidation and reduces the overhead of managing multiple disparate databases.

Container Database (CDB): holds the core data dictionary objects that come with an Oracle database installation.
Pluggable Database (PDB): holds the data dictionary objects and data related to the application. We can have many PDBs plugged into a single CDB.

A new admin role “CDB Administrator” has been introduced in Oracle 12.1 release databases.
All PDBs in a container share the CDB’s background processes, such as LGWR, rather than each running a dedicated set.

All Oracle database options/features are available on the PDB level.

RMAN backup at CDB level.

We can unplug a PDB from one CDB and plug it into another CDB.
PDBs can be cloned inside the CDB.
Management of PDBs (clone/create/plug/unplug/drop) is implemented through SQL commands.
Extremely fast PDB provisioning (cloning inside the CDB), because each CDB comes with a “PDB Seed”.
Database patch/upgrade management is very quick, as the CDB is a single point of installation.
Each PDB has its own data dictionary.
Data Guard is configured on the CDB as a whole.
RMAN point-in-time recovery at the PDB level (while the other PDBs remain open).
Resource Manager is extended to the PDB level, covering creating, unplugging, plugging in, cloning, dropping, and setting the open mode of PDBs.
Flashback of a PDB should become available in Oracle 12c Release 2.

Entire containers can be backed up in a single run, regardless of how many databases they contain.
Upgrade one container database and all pluggable databases are upgraded.

New Commands
create pluggable database …
alter pluggable database …
drop pluggable database …
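
A minimal sketch of creating and opening a PDB from the seed (names, password, and paths are illustrative):

SQL> CREATE PLUGGABLE DATABASE pdb1
     ADMIN USER pdb_admin IDENTIFIED BY secret
     FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb1/');
SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;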

New Views/Packages in Oracle 12c Release1
dba_pdbs
v$pdbs
cdb_data_files

dbms_pdb
dbms_qopatch

Oracle Transparent Data Encryption (TDE) enables you to encrypt individual columns that hold sensitive application data, or entire application tablespaces.
TDE transparently encrypts data when it is written to disk and decrypts it when it is read back by an authorized user and/or application. Applications do not have to be modified to take advantage of this feature. With TDE column encryption, each table with encrypted columns has its own encryption key (table key), which is used for all encrypted columns in that table regardless of how many there are. These table keys are stored in the data dictionary and are encrypted with the master encryption key, which is stored outside the Oracle database, either in the Oracle Wallet file or in a Hardware Security Module (HSM). No keys are stored in clear text.

TDE Tablespace Encryption enables you to encrypt entire application tablespaces. All objects created in the encrypted tablespaces are automatically encrypted. Tablespace encryption has the following benefits over TDE Column Encryption:
• No increase in storage requirements
• True transparency, no change in execution plans
• No need to identify individual columns for encryption
• Support for all data types and index types

On the Transparent Data Encryption page in Enterprise Manager, the location of the Oracle Wallet can be defined; master keys can be re-keyed or migrated to a Hardware Security Module (HSM); the Oracle Wallet password can be changed; or a (local) auto-open wallet can be generated.
Once the master encryption key is created (either in Oracle Wallet, or HSM), click on the ‘Tables’ or ‘Tablespaces’ links under ‘Related Links’ on this page to encrypt your application data, or the ‘Import and Export’ link to encrypt export files.
The Transparent Data Encryption (TDE) feature introduced in Oracle 10g Database Release 2 allows sensitive data to be encrypted within the datafiles to prevent access to it from the operating system.
Oracle Database 11g encrypts data using a master key, which is stored in a secure location called a wallet, which is a file on the database server. Table keys are placed in the data dictionary.
Oracle Database 11g generates a single encrypted table key for the table and uses it to encrypt those columns.

Implementation Steps:

1. Add the following entry in sqlnet.ora under $ORACLE_HOME/network/admin

ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=file)
(METHOD_DATA=(DIRECTORY=/u01/app/oracle/wallets)))

2. Before attempting to create a table with encrypted columns, a wallet must be created to hold the encryption key.

Creates the wallet in the location specified in sqlnet.ora i.e. /u01/app/oracle/wallets

Set the password of the wallet as “passwd”.

Open the wallet for TDE

[oracle@db wallets]$ pwd
/u01/app/oracle/wallets

[oracle@db wallets]$ ls -lrt
total 12
-rw——- 1 oracle oinstall 8757 May 6 12:56 ewallet.p12

3. You need to open the wallet and create the master encryption key. Open a SQL*Plus session and execute the following commands:
connect / as sysdba
SQL> alter system set encryption key identified by "passwd";

4. Open the wallet. The wallet must be opened explicitly after each database instance restart:

CONN sys/passwd@dbname AS SYSDBA

In Oracle 10gR2 the syntax is:
SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN AUTHENTICATED BY "passwd";

From 11g onwards the syntax is:
SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "passwd";

5. The status of the wallet can be viewed using the view v$encryption_wallet:

SQL> SELECT * FROM v$encryption_wallet;

WRL_TYPE   WRL_PARAMETER              STATUS
---------- -------------------------- ---------
file       /u01/app/oracle/wallets    OPEN
6. Create a test table with an encrypted column and insert some data. Using the ENCRYPT clause on its own is the same as using the ENCRYPT USING 'AES192' clause, as AES192 is the default encryption method.
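
For example, a minimal test table (names and data are illustrative):

SQL> CREATE TABLE emp_enc (
       empno NUMBER(4),
       ename VARCHAR2(10),
       sal   NUMBER(7,2) ENCRYPT USING 'AES192');
SQL> INSERT INTO emp_enc VALUES (7788, 'SCOTT', 3000);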

For encrypting (or decrypting) a column in an existing table, we can simply use:

SQL> ALTER TABLE EMP MODIFY (SAL ENCRYPT);
SQL> ALTER TABLE EMP MODIFY (SAL DECRYPT);

Details of the encrypted columns are stored in DBA_ENCRYPTED_COLUMNS

SQL> SELECT * FROM DBA_ENCRYPTED_COLUMNS;
SQL> SELECT * FROM user_encrypted_columns;

7. We can explicitly close the wallet using
SQL> ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

Voting Disk Corrupted

Scenario 1 – All voting disks are corrupted

– Move the voting disk to the data diskgroup; the voting disk is automatically recovered using the latest available backup of the OCR.

RECOVER VOTING DISK

– To move the voting disk to the votedg diskgroup, the ASM instance should be up, and for the ASM instance to be up, CRS should be up. Hence we will:

– stop CRS on all the nodes

– start CRS in exclusive mode on one of the nodes (host01)

– start the ASM instance on host01 using a pfile (since the spfile of the ASM instance is on ASM)

– create a new diskgroup votedg

– move the voting disk to the votedg diskgroup

– stop CRS on host01 (it was running in exclusive mode)

– restart CRS on host01

– start CRS on the rest of the nodes

– start the cluster on all the nodes
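
A consolidated sketch of the commands for this scenario (diskgroup, disk, node, and path names are illustrative):

# crsctl stop crs -f                                       (run on all nodes)
# crsctl start crs -excl                                   (on host01 only)
SQL> startup pfile='/tmp/init+ASM1.ora'
SQL> create diskgroup votedg normal redundancy disk 'ORCL:DISK1','ORCL:DISK2','ORCL:DISK3';
# crsctl replace votedisk +votedg
# crsctl stop crs                                          (on host01)
# crsctl start crs                                         (run on all nodes)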

Scenario 2 – A few voting disks are corrupted

– Move the voting disk to the data diskgroup; the voting disk will be automatically recovered using a surviving copy of the voting disk.

OCR Corrupted or Not Available?

——————————————-

It is not possible to directly restore a manual or automatic OCR backup if the OCR is located in an ASM disk group: the command ‘ocrconfig -restore’ requires ASM to be up and running in order to restore an OCR backup to an ASM disk group, but for ASM to be available, the CRS stack must have been successfully started. For the restore to succeed, the OCR also must not be in use (r/w), i.e. no CRS daemon must be running while the OCR is being restored. The procedure below applies if the OCR is corrupted or otherwise unavailable, for example when the OCR is on Oracle ASM and the disk group is no longer available.

When using an ASM disk group for CRS there are typically 3 different types of files located in the disk group that potentially need to be restored/recreated:

  • the Oracle Cluster Registry file (OCR)
  • the Voting file(s)
  • the shared SPFILE for the ASM instances

 

If you store OCR on an Oracle ASM disk group and the disk group is not available, then you must recover and mount the Oracle ASM disk group.

 

1. Locate the latest automatic OCR backup

2. Make sure the Grid Infrastructure is shutdown on all nodes

# $CRS_HOME/bin/crsctl stop crs -f

3. Start the CRS stack in exclusive mode

11.2.0.1:

# $CRS_HOME/bin/crsctl start crs -excl

Please note:
This document assumes that the CRS diskgroup was completely lost, in which case the CRS daemon (resource ora.crsd) will terminate again due to the inaccessibility of the OCR – even if the above message indicates that the start succeeded.
If this is not the case – i.e. if the CRS diskgroup is still present (but corrupt or incorrect) – the CRS daemon needs to be shut down manually using:

# $CRS_HOME/bin/crsctl stop res ora.crsd -init
otherwise the subsequent OCR restore will fail.

11.2.0.2 and above:

# $CRS_HOME/bin/crsctl start crs -excl -nocrs

4. Label the CRS disk for ASMLIB use

# /usr/sbin/oracleasm createdisk ASMD40 /dev/sdh1

5. Create the CRS diskgroup via sqlplus

SQL> create diskgroup CRS external redundancy disk 'ORCL:ASMD40' attribute 'COMPATIBLE.ASM' = '11.2';

6. Restore the latest OCR backup

Now that the CRS disk group is created and mounted, the OCR can be restored; this must be done as the root user:

# cd $CRS_HOME/cdata/rac_cluster1/
# $CRS_HOME/bin/ocrconfig -restore backup00.ocr

 

7. Start the CRS daemon on the current node (11.2.0.1 only !)

Now that the OCR has been restored, the CRS daemon can be started; this is needed to recreate the Voting file. Skip this step for 11.2.0.2 and above.

# $CRS_HOME/bin/crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

 

8. Recreate the Voting file

The Voting file needs to be initialized in the CRS disk group:

# $CRS_HOME/bin/crsctl replace votedisk +CRS

9. Recreate the SPFILE for ASM (optional)

10. Shutdown CRS 

11. Rescan ASM disks

If using ASMLIB, rescan all ASM disks on each node as the root user:
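
# /usr/sbin/oracleasm scandisks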

12. Start CRS 
As the root user submit the CRS startup on all cluster nodes:

# $CRS_HOME/bin/crsctl start crs

13. Verify CRS 

To verify that CRS is fully functional again:

# $CRS_HOME/bin/crsctl check cluster -all

The note applies to 10gR2 and 11gR1, for 11gR2, refer to note 1062983.1 and note 1377349.1

 

OCR corrupted and the backup of the OCR is also lost?

Metalink Note: 399482.1

The procedure below can be used for recovery.

1) Execute rootdelete.sh script from All Nodes.
2) Execute rootdeinstall.sh from Primary Node.
3) Run root.sh from Primary node.
4) Run root.sh from all remaining nodes.
5) Execute remaining configurations (ONS,netca,register required resources)

Why do we collect stats?

 

Statistics are collected (gathered) to improve the performance of queries. When a SELECT is issued, Oracle performs soft/hard parsing and the CBO chooses the best execution plan to execute the query. Collected statistics help the CBO choose a better execution plan. Stats are kept in the data dictionary.

 

The CBO approach uses statistics to calculate the selectivity of predicates and to estimate the cost of each execution plan. Selectivity is the fraction of rows in a table that the SQL statement’s predicate chooses.

The optimizer uses the selectivity of a predicate to estimate the cost of a particular access method and to determine the optimal join order.

 

Statistics quantify the data distribution and storage characteristics of tables, columns, indexes, and partitions. The optimizer uses these statistics to estimate how much I/O and memory are required to execute a SQL statement using a particular execution plan.

The statistics are stored in the data dictionary, and they can be exported from one database and imported into another.

 

Information about chained rows, average free space, or the number of unused data blocks is not required by the CBO.

The CBO only requires information about which blocks are currently in use (more precisely, which is the last used data block for an object), since its only job is to identify a better execution plan for reading/writing/deleting/updating data in those used blocks.

 

You must gather statistics on a regular basis to provide the optimizer with information about schema objects.

New statistics should be gathered after a schema object’s data or structure are modified in ways that make the previous statistics inaccurate.

 

For example,

INSERT/DELETE – After loading (or deleting) a significant number of rows in a table, you should collect new statistics on the number of rows.

UPDATE – After updating data in a table, you do not need to collect new statistics on the number of rows, but you might need new statistics on the average row length.
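
A typical way to gather fresh statistics for a single table (schema and table names are illustrative):

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE, degree => 4);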

 

The statistics generated include the following:

 

Table statistics –

Number of rows

Number of blocks

Number of empty blocks

Average row length

 

Column statistics –

Number of distinct values (NDV) in column

Number of nulls in column

Data distribution (histogram)

 

Index statistics –

Number of leaf blocks

Levels

Clustering factor

 

Difference between ANALYZE and DBMS_STATS?

  1. ANALYZE always runs serially; DBMS_STATS can run serially or in parallel.
  2. ANALYZE can gather additional information that is not used by the optimizer, such as information about chained rows and the structural integrity of indexes, tables, and clusters. DBMS_STATS does not gather this information.
  3. ANALYZE cannot overwrite or delete some of the statistics values that were gathered by DBMS_STATS.
  4. ANALYZE is used to list chained rows and to validate structure, whereas DBMS_STATS is used to gather statistics.
  5. ANALYZE cannot gather statistics on virtual columns, whereas DBMS_STATS can.
  6. ANALYZE can gather cluster statistics, whereas DBMS_STATS does not.

 

 

 

When you create an index, are statistics collected automatically? (Yes – from Oracle 10g onwards, index statistics are computed automatically as part of CREATE INDEX.)

 

Performance Tuning

 

AWR

ADDM

ASH

Statspack

TKPROF

EXPLAIN PLAN

AUTO TRACE

PLAN TABLE

SQL TRACE

V$SQL

Hash Value

Sql id

 

Compute/gather/collect statistics

CBO/RBO

 

http://www.akadia.com/services/ora_chained_rows.html

http://www.stechno.net/sap-notes.html?view=sapnote&id=1269911

 

Reclaiming Unused Space

http://docs.oracle.com/cd/E11882_01/server.112/e10897/storage.htm#ADMQS0625

http://docs.oracle.com/cd/E11882_01/server.112/e25494/schema.htm#ADMIN11608

 

************ Physical I/O on your system was really, really slow. Do you have a hot disk, slow disks, or a really busy system?

************ How to figure out the

 

 

When a block is requested by a query, Oracle looks for it in the buffer cache; if it is found, the result is a logical read. If the block is not found there, the result is a physical read (disk I/O).

 

Temporary tablespaces are used to manage space for database sort and joining operations and for storing global temporary tables.

For joining two large tables or sorting a bigger result set, Oracle cannot always work in memory using SORT_AREA_SIZE in the PGA (Program Global Area).

Space will be allocated in a temporary tablespace for doing these types of operations.

Other SQL operations that might require disk sorting are: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, Sort-Merge joins, etc.
Note that a temporary tablespace cannot contain permanent objects and therefore doesn’t need to be backed up. A temporary tablespace contains schema objects only for the duration of a session.

Tempfiles (Temporary Datafiles)

Unlike normal datafiles, tempfiles are not fully allocated. When you create a tempfile, Oracle only writes to the header and the last block of the file. This is why it is much quicker to create a tempfile than a normal datafile.
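
A minimal sketch of creating a temporary tablespace with a tempfile (path and sizes are illustrative):

SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '/u01/oradata/db/temp02.dbf' SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE 4G;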
