
interview questions : core database

!! oracle core database interview questions :!!






68. What is Cursor ?
-- A Cursor is a handle (a name or pointer) to the memory associated with a
-- specific SQL statement.

21. What is an Index ?
-- An Index is an optional structure associated with a table that gives direct access
-- to rows and can be created to increase the performance of data retrieval. An index
-- can be created on one or more columns of a table.

22. How are Indexes Updated ?
/*
    Indexes are automatically maintained and used by ORACLE. Changes to table data are
    automatically incorporated into all relevant indexes.
*/

23. What are Clusters ?
/*
    Clusters are groups of one or more tables physically stored together because they share common
    columns and are often used together.
*/

24. What is cluster Key ?
/*
    The related columns of the tables in a cluster are called the Cluster Key.
*/

25. What is Index Cluster ?
/*
    A Cluster with an index on the Cluster Key.
*/

26. What is Hash Cluster ?
/*
    A row is stored in a hash cluster based on the result of applying a hash function
    to the row's cluster key value. All rows with the same hash key value are stored
    together on disk.
*/

27. When can a Hash Cluster be used ?
/*
    Hash clusters are a better choice when a table is often queried with equality queries.
    For such queries the specified cluster key value is hashed. The resulting hash key
    value points directly to the area on disk that stores the specified rows.
*/

111. What is a Procedure ?
/*
    A Procedure consists of a set of SQL and PL/SQL statements that are grouped together
    as a unit to solve a specific problem or perform a set of related tasks.
*/

112. What is difference between Procedures and Functions ?
/*
    A Function returns a value to the caller, whereas a Procedure does not (though a Procedure can return values through OUT parameters).
*/

113. What is a Package ?
/*
    A Package is a collection of related procedures, functions, variables and other
    package constructs stored together as a unit in the database.
*/

114. What are the advantages of having a Package ?
/*
    Increased functionality (for example, global package variables can be declared
    and used by any procedure in the package) and performance (for example, all objects
    of the package are parsed, compiled, and loaded into memory once).
*/

115. What is Database Trigger ?
/*   
    A Database Trigger is a procedure (a set of SQL and PL/SQL statements) that is
    automatically executed as a result of an insert into, update to, or delete from a
    table.
*/

116. What are the uses of Database Trigger ?
/*
    Database triggers can be used for automatic data generation, auditing data modifications,
    enforcing complex integrity constraints, and customizing complex security authorizations.
*/

117. What are the differences between Database Trigger and Integrity constraints ?

/*  A declarative integrity constraint is a statement about the database that is always true.
    A constraint applies to existing data in the table and any statement that manipulates the table.

    A trigger does not apply to data loaded before the definition of the trigger,
    therefore, it does not guarantee all data in a table conforms to the rules
    established by an associated trigger.

    A trigger can be used to enforce transitional constraints, whereas a declarative
    integrity constraint cannot be used.
*/

28. What is Database Link ?
--  A database link is a named object that describes a "path" from one database to
--  another.

29. What are the types of Database Links ?
--  Private Database Link, Public Database Link & Network Database Link.

30. What is Private Database Link ?
--  Private database link is created on behalf of a specific user. A private database
--  link can be used only when the owner of the link specifies a global object name
--  in a SQL statement or in the definition of the owners views or procedures.

31. What is Public Database Link ?
--  Public database link is created for the special user group PUBLIC. A public database
--  link can be used when any user in the associated database specifies a global object
--  name in a SQL statement or object definition.

32. What is Network Database Link ?
--  Network database link is created and managed by a network domain service. A network
--  database link can be used when any user of any database in the network specifies a
--  global object name in a SQL statement or object definition.

33. What is Data Block ?
--  Oracle database data is stored in data blocks. One data block corresponds to a
--  specific number of bytes of physical database space on disk.

34. How to define Data Block size ?
--  A data block size is specified for each ORACLE database when the database is
--  created. A database uses and allocates free database space in Oracle data blocks.
--  The block size is specified by the DB_BLOCK_SIZE parameter in the INIT.ORA file and cannot be changed later.

35. What is Row Chaining ?
--  In some circumstances, all of the data for a row in a table may not fit in
--  the same data block. When this occurs, the data for the row is stored in a chain
--  of data blocks (one or more) reserved for that segment.

36. What is an Extent ?
--  An Extent is a specific number of contiguous data blocks, obtained in a single
--  allocation, used to store a specific type of information.

37. What is a Segment ?
--  A segment is a set of extents allocated for a certain logical structure.

38. What are the different type of Segments ?
--  Data Segment, Index Segment, Rollback Segment and Temporary Segment.

39. What is a Data Segment ?
--  Each non-clustered table has a data segment. All of the table's data is stored
--  in the extents of its data segment. Each cluster also has a data segment; the data
--  of every table in the cluster is stored in the cluster's data segment.

40. What is an Index Segment ?
--  Each Index has an Index segment that stores all of its data.

41. What is Rollback Segment ?
--  A Database contains one or more Rollback Segments to temporarily store "undo"
--  information.

42. What are the uses of Rollback Segment ?
--    Rollback Segments are used to generate read-consistent database information,
--    during database recovery, and to roll back uncommitted transactions for users.
1) How do you switch from an init.ora file to a spfile?
-> Issue the create spfile from pfile command.
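A minimal sketch (the pfile path is hypothetical); on the next startup the instance reads the spfile automatically:
SQL> CREATE SPFILE FROM PFILE='/u01/app/oracle/product/11.2.0/dbs/initPROD.ora';
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP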


2) Explain the difference between a data block, an extent and a segment.
-> A data block is the smallest unit of logical storage for a database object.
As objects grow they take chunks of additional storage that are composed of
contiguous data blocks. These groupings of contiguous data blocks are called extents.
All the extents that an object takes when grouped together are considered the segment
of the database object.


3) Where would you look for errors from the database engine?
-> In the alert log.


4) Compare and contrast TRUNCATE and DELETE for a table.
-> Both the truncate and delete commands have the desired outcome of getting rid of all the
rows in a table. The difference between the two is that the truncate command is a DDL operation
that simply moves the high-water mark and generates no undo (it cannot be rolled back). The delete
command, on the other hand, is a DML operation, which generates undo and thus takes longer to complete.
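A quick illustration against a hypothetical table T:
SQL> DELETE FROM t;        -- DML: generates undo, can be rolled back
SQL> ROLLBACK;             -- the rows come back
SQL> TRUNCATE TABLE t;     -- DDL: resets the high-water mark, cannot be rolled back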


5) Give the reasoning behind using an index. ?
-> Faster access to data blocks in a table.


6) What type of index should you use on a fact table?
-> A Bitmap index.
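For example, on a low-cardinality column of a hypothetical SALES fact table:
SQL> CREATE BITMAP INDEX sales_region_bix ON sales(region_id);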


7) Give two examples of referential integrity constraints.
-> A primary key and a foreign key.

8) Explain the difference between ARCHIVELOG mode and NOARCHIVELOG mode.?
-> ARCHIVELOG mode is a mode that you can put the database in for creating a backup of all transactions
that have occurred in the database so that you can recover to any point in time.

NOARCHIVELOG mode is basically the absence of ARCHIVELOG mode and has the disadvantage
of not being able to recover to any point in time.
NOARCHIVELOG mode does have the advantage of not having to write transactions to an
archive log and thus increases the performance of the database slightly.


9) What command would you use to create a backup control file?
-> ALTER DATABASE BACKUP CONTROLFILE TO TRACE; (a binary copy can be taken with ALTER DATABASE BACKUP CONTROLFILE TO '<path>';)


10) Give the stages of instance startup to a usable state where normal users may access it. ?
->
STARTUP NOMOUNT - Instance startup

STARTUP MOUNT - The database is mounted

STARTUP OPEN - The database is opened


11) Explain the difference between $ORACLE_HOME and $ORACLE_BASE.?
-> ORACLE_BASE is the root of the Oracle directory tree. ORACLE_HOME is located beneath
ORACLE_BASE and is where the Oracle product binaries reside.



12) Explain the difference between Pfile and Spfile ?
-> A pfile is a text file, whereas an spfile is a binary (server-managed) file that can be created
from the pfile and is read at database startup.


13) Explain logical and physical part of Oracle database in short
-> The logical side consists of tablespaces (with their segments, extents and blocks); the physical side consists of the datafiles.


14) What view(s) do you use to associate a user’s SQLPLUS session with his o/s process?
-> v$session and v$process
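A typical join of the two views (the username is hypothetical):
SQL> SELECT s.sid, s.serial#, s.username, p.spid AS os_pid
       FROM v$session s, v$process p
      WHERE p.addr = s.paddr
        AND s.username = 'SCOTT';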


15) What is an Integrity Constraint ?
-> An integrity constraint is a declarative way to define a business rule for a column of a table. 


16) What is the SYSTEM tablespace and when is it created?
-> Every ORACLE database contains a tablespace named SYSTEM, which is automatically
created when the database is created. The SYSTEM tablespace always contains the
data dictionary tables for the entire database.


17) How are Indexes Updated ?
-> Indexes are automatically maintained and used by ORACLE. Changes to table
data are automatically incorporated into all relevant indexes. 


18) What is a Data Block ?
-> Oracle database data is stored in data blocks. One data block corresponds to a specific
number of bytes of physical database space on disk.


19) What are the background processes in the Oracle architecture?
-> The mandatory background processes in 9i are five:
   SMON, PMON, CKPT, DBWR and LGWR.
  
 The background processes related to the Oracle architecture are as follows:

    1.DBWR(DATABASE WRITER)
    2.LGWR(LOG WRITER)
    3.SMON(SYSTEM MONITOR)
    4.PMON(PROCESS MONITOR)
    5.CKPT(CHECK POINT)
    6.SERVER
    7.CJQ(CO-ORDINATOR JOB QUEUE PROCESSES)
    8.RECOVER
    9.DISPATCHER
    10.ARCHIVER 

Including some more from Oracle 11G
    1) VKTM (Virtual Keeper of TiMe keeps the "SGA time" variable current)
    2) GEN 0 (General Task Execution Process)









*. What is Oltp database?
        OLTP means Online Transaction Processing; OLAP means Online Analytical Processing. OLTP deals with processing data from transactional systems. For example, an application that loads the reservation data of a hotel is an OLTP system. An OLTP system is designed mainly with the performance of the end application in mind. It comprises the application, the database and the reporting system that works directly on this database. The database in an OLTP system is designed to improve application efficiency and thereby reduce the processing time of the application.

*. What is Olap database?
        OLAP systems are mainly developed using data in a warehouse. A need was felt to isolate older data and store it in a format that would ease reporting bottlenecks, and to redesign the data into a structure that would make this repository the prime source of business decisions. OLAP systems are developed on this isolated, restructured data.
*. What is the function of Optimizer?
        The goal of the optimizer is to choose the most efficient way to execute a SQL statement.
   
*. What is Execution Plan?
        The combination of the steps the optimizer chooses to execute a statement is called an execution plan.
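        For example, to generate and display an execution plan (the query is hypothetical):
        SQL> EXPLAIN PLAN FOR SELECT * FROM emp WHERE empno = 7788;
        SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);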
   
*. What are the different approaches used by Optimizer in choosing an execution plan?
        Rule-based and Cost-based.
   
*. What is dictionary cache?
        The dictionary cache stores “metadata” (data about your tables and indexes) and it’s also known as the row cache. It is used to cache data dictionary related information in RAM for quick access. The dictionary cache is like the buffer cache, except it’s for Oracle data dictionary information instead of user information.
   
*.     What is a Data Dictionary?
    The Oracle data dictionary is one of the most important components of the Oracle DBMS. It contains all information about the structures and objects of the database such as tables, columns, users, data files etc. The data stored in the data dictionary is also often called metadata.
   
*.     What is Database Buffers?
        Database buffers are caches in the SGA used to hold the data blocks that are read from the data segments in the database, such as tables, indexes and clusters. The DB_BLOCK_BUFFERS parameter in INIT.ORA decides the size.
       
*. What is the functionality of SYSTEM table space?
        The SYSTEM tablespace is a core part of an Oracle database; all of the data dictionary information is in it. It is created when the database is created, it is managed by the Oracle server, and the DBA cannot change its contents directly. It contains the data dictionary objects.

*.  What is the function of checkpoint (CKPT)?
        Checkpoint (CKPT) is a background process which ensures that the DBWn process has written data to the datafiles, and updates the control file and datafile headers to establish data consistency. The checkpoint is also used to determine the point in time from which to begin recovery in case of failure.
*. When ckpt occurs?  
     1. For every 3 seconds
     2. When 1/3rd of DB buffer fills
     3. When a log switch occurs
     4. When database shuts down
   
*. When does LGWR write to the database?
        Log Writer (LGWR) writes the redo log entries generated in the redo log buffer of the SGA to an on-line redo log file. LGWR writes redo log entries into an on-line redo log file when transactions commit and when the log buffer is full.
   
*.      What is Shared SQL Area ?
        A shared SQL area contains the parse tree and execution plan for a given SQL statement. Oracle saves memory by using one shared SQL area for SQL statements run multiple times, which often happens when many users run the same application. Oracle allocates memory from the shared pool when a new SQL statement is parsed, to store it in the shared SQL area. The size of this memory depends on the complexity of the statement. If the entire shared pool has already been allocated, Oracle can deallocate items from the pool using a modified LRU (least recently used) algorithm until there is enough free space for the new statement's shared SQL area.
       
*. What Does DBWR do?
        Database writer writes modified blocks from the database buffer cache to the data files.
       
*. What is server processes?
        A server process is one that handles user requests. When you type in a SQL statement, the server process handles the parsing and running of that SQL statement.

*.    Which background process performs Crash recovery? 
        SMON (system monitor)
       
*.    What is On-line Redo Log?
        The on-line redo log is a set of two or more on-line redo log files that record all committed changes made to the database. Whenever a transaction is committed, the corresponding redo entries temporarily stored in the redo log buffer of the SGA are written to an on-line redo log file by the background process LGWR. The on-line redo log files are used in a cyclical fashion.

*.  What are the steps involved in Instance Recovery?
        a. Rolling forward to recover data that has not been recorded in the data files but has been recorded in the on-line redo log, including the contents of rollback segments.
        b. Rolling back transactions that have been explicitly rolled back or have not been committed, as indicated by the rollback segments regenerated in step a.
        c. Releasing any resources (locks) held by transactions in process at the time of the failure.
        d. Resolving any pending distributed transactions undergoing a two-phase commit at the time of the instance failure.
*.    What does COMMIT do?
        COMMIT makes permanent the changes resulting from all SQL statements in the transaction. The changes made by the SQL statements of a transaction become visible to other users' sessions and to transactions that start only after the transaction is committed.

*.  What are the steps involved in Database Startup?
    Start an instance, Mount the Database and Open the Database.

*. What are the steps involved in Database Shutdown?
    Close the Database; Dismount the Database and Shutdown the Instance.

*. What is Restricted Mode of Instance Startup?
    An instance can be started in restricted mode so that when the database is open connections are limited only to those whose user accounts have been granted the RESTRICTED SESSION system privilege.
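    For example:
    SQL> STARTUP RESTRICT;
    SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;    -- on an already-open instance
    SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;   -- back to normal access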

*. What mode the instance should be to create the database?
    No mount

*. While creating database can you specify the size of control files?
      No

*. Which parameter determines the size of SHARED POOL?
    SHARED_POOL_SIZE.
   
*. What is a trace file and how is it created?
    Each server and background process can write to an associated trace file. When an internal error is detected by a process, it dumps information about the error to its trace file. Trace files can be used for diagnosing problems and tuning the database.

*.     What is a Tablespace?
        A database is divided into logical storage units called tablespaces. A tablespace is used to group related logical structures together.
       
*.      What are the Characteristics of Data Files?
        A data file can be associated with only one tablespace. One or more data files form a logical unit of database storage called a tablespace.
       
*.     How do you drop a tablespace, if it has database objects?
    Drop tablespace <tablespace_name> including contents;
   
*.  How to rename a datafile?
    1. Alter tablespace <TBSNAME> offline;
    2. Alter tablespace <TBSNAME> rename datafile 'oldname' to 'newname';
    3. Alter tablespace <TBSNAME> online;
   
*.    What is the procedure for Transportable tablespace migration?
    Transportable Tablespaces (TTS) allows you to copy a set of datafiles for a tablespace on one database and plug that tablespace into a different database.As we noted, you cannot transport a single partition of a table without transporting the entire table. Therefore, we will need to exchange the partition with a stand-alone table temporarily, so that the partition becomes its own table.
   
*.    How to rename the tablespace?
    Sys> alter tablespace <old tablespace name> rename to <new tablespace name>;
   
*.     Tell me about ORA 1555 and how do you address if you get this error?
    It usually occurs after queries or batch processes have been running for a long time, which means you can lose many hours of processing when the error crops up. There are three situations that can cause the ORA-01555 error:
    A. An active database with an insufficient number of small-sized rollback segments
    B. A rollback segment corruption that prevents a consistent read requested by the query
    C. A fetch across commits while your cursor is open

*.     Which view shows UNDO usage - how much undo has been flushed out, how much it holds now, and how many active and expired blocks there are?
    V$UNDOSTAT
   
*. What are the init parameters you have to set to make use of undo management?
    undo_tablespace = UNDOTBS1
    undo_management = AUTO
    undo_retention = <time in seconds>
    (and comment out any ROLLBACK_SEGMENTS parameter)
   
*. What is flashback query and flash back recovery?
        Flashback query is a feature introduced in Oracle 9i. Flashback query enables us to query our data as it existed in a previous state; in other words, we can query our data from a point in time before we or any other users made permanent changes to it. Flashback recovery can bring the complete database back to a previous state based on an SCN number, a timestamp or a restore point.
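        A flashback query sketch (table and interval are hypothetical):
        SQL> SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE)
              WHERE empno = 7788;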
   
* . How to flush recycle bin?
        We use the PURGE command to flush the recycle bin. Oracle will automatically remove old data from the recycle bin if the tablespace needs more space. If you want to purge just one single table then you type "PURGE TABLE <table_name>".
   
*.  What background process refreshes materialized views?
        The job queue processes (Jnnn), coordinated by CJQ0 (the job queue coordinator).
       
*.     What is netstat?
        What is its usage? It is a utility to check port number availability.
        Usage: netstat -na | grep <port_number>
   
*.     Which operations you can perform using database links?
    Options :-> DDL or DML
        DML
   
*.    Which operations are not allowed on Materialized views?
        DML
   
*. Can we manually refresh any materialized view? Using which package
        Yes, DBMS_MVIEW
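        For example, refreshing a hypothetical materialized view:
        SQL> EXEC DBMS_MVIEW.REFRESH('SALES_MV', 'C');    -- 'C' = complete, 'F' = fast, '?' = force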
   
*. Where the Materialized view stored?
        In the database itself. Unlike an ordinary view, a materialized view physically stores its data, like a read-only table.

*. What is snapshot?
        Snapshot is an object used to dynamically replicate data between distribute database at specified time intervals.

*. What are the various types of snapshots? There are three types of materialized views:
        Read only materialized view, Updateable materialized view, Writeable materialized view
       
*. What are the options available to refresh snapshots?
    COMPLETE -:     The snapshot table is completely regenerated using the snapshot's query against the master tables every time the snapshot is refreshed.
    FAST -:     If a simple snapshot is used, then a snapshot log can be used to send only the changes to the snapshot tables.
    FORCE -: Default value. If possible it performs a FAST refresh; otherwise it will perform a complete refresh.
   
*.  What is function of RECO?
        Recoverer (RECO) is used to resolve distributed transactions that are pending due to a network or system failure in a distributed database. At timed intervals, the local RECO attempts to connect to remote databases and automatically complete the commit or rollback of the local portion of any pending distributed transactions.
   
*.  What is Archived Redo Log?
        Archived Redo Logs consist of redo log files that have been archived before being reused.

*.  What is Mirrored on-line Redo Log?
    A mirrored on-line redo log consists of copies of on-line redo log files physically located on separate disks; changes made to one member of the group are made to all members.
   
*.  What does a Control file Contain?
        A control file records the physical structure of the database. It contains the database name, the names and locations of the database's datafiles and redo log files, and the timestamp of database creation.
       
*.     What happens when archive log destination becomes 100% full when the database is running in ARCHIVELOG mode?
        How do you recover? The database hangs (the archiver cannot archive and new redo cannot be written). We should move old archive logs to a different location (or add space or another destination) and the database will continue.
       
*.  How many maximum control files you can create in Oracle database?
        What is the error number you get if you try to create more than 8?
        8; ORA-00208
   
*.     If you want to maintain one more archive destination which parameter you have to set?
            It is a dynamic parameter; you have to set an additional LOG_ARCHIVE_DEST_n parameter, for example log_archive_dest_2=
           
*. How to create a trace file?
    "Alter database backup controlfile to trace;"
   
*. Where do you get all hidden parameters?
    In the table x$ksppi
    Sys>select ksppinm,ksppdesc from x$ksppi;
   
*.  How do you know whether the parameter is dynamic or static?
    Check the value from the column ISINSTANCE_MODIFIABLE from V$PARAMETER.
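    For example (the parameter name is just an illustration; ISSYS_MODIFIABLE shows whether ALTER SYSTEM can change it):
    SQL> SELECT name, value, issys_modifiable, isinstance_modifiable
           FROM v$parameter
          WHERE name = 'undo_retention';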
   
*. Which role do you grant to the RMAN user while configuring the recovery catalog?
        The RECOVERY_CATALOG_OWNER role.
       
*. Temporarily recovery catalog database (for RMAN) is down. Can you still run a backup? How?
        Yes, using no catalog mode
       
*.     You have run a backup of the database using RMAN nocatalog. How do you sync the recovery catalog with the metadata about the backup that was taken?
    Resync catalog
   
*.      With one catalog, can we connect to two target databases at a time?
        Yes, we can.
       
*.     What is difference between OBSOLETE and EXPIRED?
        Obsolete backups are backups that are no longer needed for recovery according to the configured retention policy. Expired backups are backups whose physical files are no longer available on disk (as detected by CROSSCHECK).
       
*.     How do you speed up backup jobs in RMAN?
    By increasing the number of channels (parallelism).
   
   
   
   
   
   
USER MANAGEMENT :



*.    What is the difference between a user and schema?
        A user account may not own any objects of its own and may only access other users' objects; a schema is the collection of objects owned by a user.

*.    After creating user, what are the privileges you would assign?
    We will give CONNECT role in 10g (CREATE SESSION if 9i) and any other roles or privileges specified by the application support

*.    What is the disadvantage of resource role?
    It includes the UNLIMITED TABLESPACE privilege, which overrides any quota granted to the user and makes it unlimited

*.    How you will identify the privileges and roles assigned to a user?
    Using following views
    dba_sys_privs
    dba_tab_privs
    dba_role_privs
    role_sys_privs
    role_tab_privs
    role_role_privs

*.    How to create user and grant the permission in a single command?

    SQL> grant create session to user_a identified by <pwd>;

*.    What happens to the objects if we change the default tablespace for the user?
    Nothing. They will continue to work as normal

*.    You got a requirement to run a script as user “MARK” and you don’t have the password of MARK. How you will take next step?
    We will ask application support. In case they cannot be reached, we can note the password hash stored in DBA_USERS (SYS.USER$ from 11g onwards), change the password temporarily, and once the work is done change it back using ALTER USER ... IDENTIFIED BY VALUES '<hash>'

    Also, we can use ALTER SESSION SET CURRENT_SCHEMA in SQL*Plus to work against MARK's schema (with our own privileges)

*.    What is the view to check default permanent and temp tablespace values?
    database_properties

*.    Have you implemented password policies?
    a. Yes. Using profile management

*.    What are SNIPPED sessions? Is there any impact on the database of having them?

    Those are sessions terminated from the database because they crossed the IDLE_TIME limit. The disadvantage is that even though the Oracle-level sessions are cleared, at the OS level the processes may still occupy resources, which is a burden on the server

*.    I am using IDLE_TIME as 15 min, but observed that even after 20min idle time, session is not getting disconnected, what is the reason?
    RESOURCE_LIMIT parameter is not set to TRUE

*.    As a DBA will you recommend to implement auditing in the database?
    Depends on the type of database, but it would be wise to implement it at least on some critical tables

*.    What factors you will choose before you enable auditing?
    We need to check about database performance and what level of auditing is required

*.    If auditing is not in place, how you will get to know if a user performed update or not yesterday?
    We can use logminer to read the archives which are generated yesterday

*.    Are you maintaining any SOX compliance database?
    No. We don’t have any SOX database as of now

*.    How to check which users are granted with sysdba role?
    We can check from the V$PWFILE_USERS view
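    For example:
    SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users;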

*.    If we want to restrict users to connect to the database, but want to allow some users to connect, how can we do that?
    We can enable restricted session at the instance level and grant the RESTRICTED SESSION privilege to the users whom we want to allow to connect
   

   
   
   

interview questions : backup/restore

!! oracle backup/restore interview questions : !!



*.    What is the difference between cold and hot backup?
        a. A COLD backup is taken by shutting down the database, whereas a HOT backup is taken while the database is up and running
*.    What happens when a database or tablespace is placed in begin backup mode?
        The datafile headers get frozen, i.e. CKPT will not update the latest SCN in them
        DBWR will still write data into the datafiles
        At end backup, CKPT updates the latest SCN in the datafile headers
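        For example, for a single tablespace (10g and later also allow ALTER DATABASE BEGIN BACKUP for the whole database):
        SQL> ALTER TABLESPACE users BEGIN BACKUP;
             -- copy the tablespace's datafiles at the OS level
        SQL> ALTER TABLESPACE users END BACKUP;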
*.    Why more redo will generate during hot backup?
        It is to avoid fractured blocks: the first time a block is changed while its tablespace is in backup mode, Oracle copies the entire block into the redo stream rather than just the change vector
*.    What is fractured block?
        A block which might contain inconsistent data. This happens because the OS copy reads a block in smaller pieces and at a different speed than DBWR writes it during hot backup, so the copy can capture a half-updated block
*.    What is the difference between complete and incomplete recovery?
        No data loss in complete recovery whereas some data loss is observed in incomplete recovery
*.    What will happen if we use resetlogs?
        It will create new redo log files (if they are not already there)
        It will reset the log sequence number to 1
*.    Out of 100 datafiles, I lost 29 files. How you will identify which files to recover?
        a. Using the view V$RECOVER_FILE
*.    How to check if incomplete recovery is performed in the database?
        SQL> select RESETLOGS_TIME,RESETLOGS_CHANGE from v$database;
*.    I placed a tablespace in hot backup mode and a datafile which is being backed up is lost. How will you recover it?
        We can restore it from the old backup and apply all the archives till now
*.    Yesterday night's backup was successful. This morning we added a datafile at 11 AM. At 3 o'clock in the afternoon the newly
        added datafile is lost. Can I recover that datafile? If so, how?
        Yes we can recover it. We need to create that datafile using the "alter database create datafile 'path';" command and then apply all the archives (see the sketch below)
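        A sketch with a hypothetical file name:
        SQL> ALTER DATABASE CREATE DATAFILE '/u01/oradata/prod/users02.dbf' AS '/u01/oradata/prod/users02.dbf';
        SQL> RECOVER DATAFILE '/u01/oradata/prod/users02.dbf';
        SQL> ALTER DATABASE DATAFILE '/u01/oradata/prod/users02.dbf' ONLINE;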
*.        What is the importance of archives during recovery process?
        As they will store all the changes happened in the database, always we can do complete recovery if we have a perfect backup
*.    How you will recover database when all copies of control file are lost?
        We can do a complete recovery if we have a latest trace of it. Or else, we need to do incomplete recovery by restoring controlfile from last successful backup
*.    The application team informed us that an important table was dropped. How will you recover it?
        We can recover it by doing until-time (point-in-time) recovery, as sketched below. But this will affect other user transactions, so we need to get approval for this first
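        A minimal RMAN until-time sketch (the timestamp is hypothetical; the database must be mounted, and must be opened with RESETLOGS afterwards):
        RMAN> RUN {
                SET UNTIL TIME "TO_DATE('2015-06-01 09:55:00','YYYY-MM-DD HH24:MI:SS')";
                RESTORE DATABASE;
                RECOVER DATABASE;
              }
        RMAN> ALTER DATABASE OPEN RESETLOGS;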
*.    What are the pre-requisite factors you will consider before performing until-time recovery?
        We need to make sure that other users will not be affected by doing this
*.    You need to restore the database and then realize there are no control files. How will you proceed?
        Either we can restore the controlfiles first and then do the recovery (but there is a data loss as it is an incomplete recovery), or we can create a new controlfile if we have a latest trace of it
       
       
*.    What is difference between Restoring and Recovery of database?
        Restoring means copying the database objects from the backup media to the destination where they are actually required, whereas recovery means applying the redo to the objects copied earlier (rolling forward) in order to bring the database into a consistent state.
   
*.    What is the benefit of running the DB in archivelog mode over no archivelog mode?
        When a database is in noarchivelog mode, whenever a log switch happens the redo log information is overwritten and lost. In order to avoid this, the redo logs must be archived; this is achieved by configuring the database in archivelog mode.
   
*.    If an Oracle database crashed, how would you recover a transaction which is not in the backup?
        If the database is in archivelog mode we can recover that transaction; otherwise we cannot recover a transaction which is not in the backup.
   
   
*.    Incremental backup levels:
        Level 0 – full backup that can be used for subsequent incrementals
        RMAN> backup incremental level 0 database;
        Differential Level 1–only the blocks that have changed since the last backup (whether it is level 0 or level 1)
        RMAN> backup incremental level 1 differential database;
        Cumulative Level 1 – all changes since the last level 0 incremental backup
        RMAN> backup incremental level 1 cumulative database;
        A full backup cannot be used for a cumulative level 1 backup.
        A cumulative level 1 backup must be done on top of an incremental level 0 backup.
   
*.    Why does an RMAN incremental backup fail even though a full backup exists?
        If you have taken the RMAN full backup using the command 'BACKUP DATABASE': a level 0 backup is physically identical to a full backup, but the level 0 backup is recorded as an incremental backup in the RMAN repository so it can be used as the parent for a level 1 backup. A plain 'full backup without level 0' cannot be considered a parent backup from which you can take a level 1 backup.
   
   
   
   
   
   
commands :


RMAN> CONNECT CATALOG rman/pwd@catdb;
RMAN> CONNECT AUXILIARY rman/pwd@auxdb;
RMAN> RESYNC CATALOG FROM DB_UNIQUE_NAME ALL;
RMAN> IMPORT CATALOG rman/rman@catdb1 DB_NAME=prod1 NO UNREGISTER;
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR SBT TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR SBT TO 1; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE; # default -- Oracle 11g R2
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '.../dbs/snapcf_sid.f'; # default


%F = dbid, day, month, year and sequence
%U = %u_%p_%c
%u = eight characters of the backup set and time ...
%p = piece number within the backupset
%c = copy number of the backup piece ...

RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS='ENV=mml_env_settings';
RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'SYS/pwd@node2' PARMS 'ENV=(NSR_SERVER=bksrv2)';
RMAN> SET DBID=4240978820;
RMAN> SET COMMAND ID TO 'rman';



BACKUP FULL Options
BACKUP FULL AS (COPY | BACKUPSET) Options
BACKUP INCREMENTAL LEVEL [=] integer Options
BACKUP INCREMENTAL LEVEL [=] integer AS (COPY | BACKUPSET) Options
BACKUP AS (COPY | BACKUPSET) Options
BACKUP AS (COPY | BACKUPSET) (FULL | INCREMENTAL LEVEL [=] integer) Options


BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
BACKUP VALIDATE DATABASE;
BACKUP VALIDATE CHECK LOGICAL DATABASE;
BACKUP AS COPY TABLESPACE test;
 LIST INCARNATION OF DATABASE;

 listBackupOption::=
[[BY BACKUP] [VERBOSE] | SUMMARY | BY {BACKUP SUMMARY|FILE}]


RMAN> REPORT OBSOLETE;
RMAN> REPORT NEED BACKUP;
RMAN> REPORT NEED BACKUP DAYS=5;
RMAN> REPORT NEED BACKUP REDUNDANCY=3;
RMAN> REPORT NEED BACKUP RECOVERY WINDOW OF 7 DAYS;
RMAN> REPORT NEED BACKUP DATABASE;
RMAN> REPORT NEED BACKUP INCREMENTAL 1;
RMAN> REPORT UNRECOVERABLE;
RMAN> REPORT SCHEMA;
RMAN> REPORT SCHEMA AT TIME 'sysdate-20/1440'

CROSSCHECK BACKUP OF DATABASE;
 CROSSCHECK BACKUP OF TABLESPACE userd COMPLETED BEFORE 'SYSDATE-14';

RESTORE SPFILE;
RESTORE ARCHIVELOG ALL VALIDATE;
RESTORE ARCHIVELOG ALL PREVIEW;


 DUPLICATE TARGET DATABASE TO dwhdb;

 SWITCH DATABASE TO COPY;



   
   

interview questions : Goldengate

!! oracle Goldengate interview questions :!!


-- What type of Topology does Goldengate support?
GoldenGate supports the following topologies:

Unidirectional
Bidirectional
Peer-to-peer
Broadcast
Consolidation
Cascading

-- What are the main components of the Goldengate replication?
The replication configuration consists of the following processes.

Manager
Extract
Pump
Replicat

-- What databases does GoldenGate support for replication?
Oracle Database
TimesTen
MySQL
IBM DB2
Microsoft SQL Server
Informix
Teradata
Sybase
Enscribe
SQL/MX

-- What transaction types does Goldengate support for Replication?
Goldengate supports both DML and DDL Replication from the source to target.

-- What are the supplemental logging pre-requisites?
The following supplemental logging is required.

Database supplemental logging
Object level logging

-- Why is Supplemental logging required for Replication?
When a transaction is committed on the source database, only the new data is written to the redo log. However, for GoldenGate to apply these transactions on the destination database, the before-image key values are required to identify the affected rows. This data is also placed in the trail file and used to identify the rows on the destination; using the key values, the transactions are executed against them.

-- List important considerations for bi-directional replication?
The customer should consider the following points in an active-active replication environment.

Primary Key: Helps to identify conflicts and Resolve them.

Sequences: Are not supported. The workaround is to use odd/even, range or concatenated sequences.

Triggers: These should be disabled or suppressed to avoid uniqueness issues.
Data Looping: This can easily be avoided using OGG itself.

LAG: This should be minimized. If a customer says that there will not be any LAG due to the network or a huge load, then we don't need to deploy CDRs.
But this is not always the case, as there will usually be some LAG and this can cause conflicts.

CDR (Conflict Detection & Resolution): OGG has built in CDRs for all kind of DMLs that can be used to detect and resolve them.

Packaged Applications: These may not be supported, as they may contain data types which are not supported by OGG, or the vendor may not allow the application
modifications needed to work with OGG.

-- Are OGG binaries supported on ASM Cluster File System (ACFS)?
Yes, you can install and configure OGG on ACFS.

-- Are OGG binaries supported on the Database File System (DBFS)? What files can be stored in DBFS?
No, OGG binaries are not supported on DBFS. You can however store parameter files, data files (trail files), and checkpoint files on DBFS.

-- What is the default location of the GLOBALS file?
The GLOBALS file is located under the Oracle GoldenGate installation directory (OGG HOME).

-- Where can filtering of data for a column be configured?
Filtering of the columns of a table can be set at the Extract, Pump or Replicat level.

-- Is it a requirement to configure a PUMP extract process in OGG replication?
A PUMP extract is optional, but it is highly recommended to use one to safeguard against network failures. Normally it is configured
when you are setting up OGG replication across the network.

-- What are the differences between the Classic and integrated Capture?
Classic Capture:

The Classic Capture mode is the traditional Extract process that accesses the database redo logs (optionally archive logs) to capture the DML
changes occurring on the objects specified in the parameter files.

At the OS level, the GoldenGate user must be a part of the same database group which owns the database redo logs.
This capture mode is available for other RDBMS as well.
There are some data types that are not supported in Classic Capture mode.
Classic capture can’t read data from the compressed tables/tablespaces.
Integrated Capture (IC):

In the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of
logical change records (LCRs).
IC mode does not require any special setup for the databases using ASM, transparent data encryption, or Oracle RAC.
This feature is only available for oracle databases in Version 11.2.0.3 or higher.
 It also supports various object types which were previously not supported by Classic Capture.
This Capture mode supports extracting data from source databases using compression.
Integrated Capture can be configured in an online or downstream mode.

-- List the minimum parameters that can be used to create the extract process?
The following are the minimum required parameters which must be defined in the extract parameter file.

EXTRACT NAME
USERID
EXTTRAIL
TABLE

-- What are macros?
Macro is an easier way to build your parameter file. Once a macro is written it can be called from different parameter files.
Common parameters like username/password and other parameters can be included in these macros. A macro can either be another parameter file or a library.

-- Where can macros be invoked?
The macros can be called from the following parameter files.

Manager
Extract
Replicat
GLOBALS

-- How is a macro defined?
A macro statement consists of the following.

Name of the Macro
Parameter list
Macro body
Sample:
MACRO #macro_name
PARAMS (#param1, #param2, …)
BEGIN
< macro_body >
END;
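The macro is then invoked from a parameter file by name, passing values for its parameters (the file and names below are hypothetical):
INCLUDE dirprm/macro.mac
#macro_name (value1, value2)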

-- I want to configure multiple extracts to write to the same exttrail file? Is this possible?
Only one Extract process can write to one exttrail at a time. So you can’t configure multiple extracts to write to the same exttrail.

-- What type of Encryption is supported in Goldengate?
Oracle Goldengate provides 3 types of Encryption.

Data encryption using Blowfish.
Password Encryption.
Network Encryption.

-- What are the different password encryption options available with OGG?
You can encrypt a password in OGG using

Blowfish algorithm and
Advanced Encryption Standard (AES) algorithm

-- What are the different encryption levels in AES?
You can encrypt the password/data using AES with three different key lengths:

a) 128 bit
b) 192 bit and
c) 256 bit

-- Is there a way to check the syntax of the commands in the parameter file without actually running the GoldenGate process?
Yes, you can place the CHECKPARAMS parameter in the parameter file and start the process; it validates the parameter file, reports any errors in the report file, and then stops.

-- How can you increase the maximum size of the read operation into the buffer that holds the results of the reads from the transaction log?
If you are using the Classical Extract you may use the TRANSLOGOPTION ASMBUFSIZE parameter to control the read size for ASM Databases.

-- What information can you expect when there is data in the discard file?
When data is discarded, the discard file can contain:
1. Discard row details
2. Database Errors
3. Trail file number

-- What command can be used to switch writing the trail data to a new trail file?
You can use the following command to write the trail data to a new trail file.
SEND EXTRACT ext_name, ROLLOVER

-- How can you determine if the parameters for a process were recently changed?
Whenever a process is started, the parameters in the .prm file for the process are written to the process REPORT. You can look at
the older process reports to view the parameters which were used to start up the process. By comparing the older and the current
reports you can identify the changes in the parameters.

Interview Questions: RAC


Interview Questions : ASM

!! Oracle ASM Interview Questions and Answers  !!

--------------------------------------------------------------------------
                            ORACLE ASM
--------------------------------------------------------------------------

*. What is ASM?

    Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager. ASM is based on the principle that the database should manage storage instead of requiring an administrator to do it. ASM eliminates the need for you to directly manage potentially thousands of Oracle database files.
   
    In Oracle Database 10g/11g there are two types of instances: database and ASM instances. The ASM instance, which is generally named +ASM, is started with the INSTANCE_TYPE=ASM init.ora parameter. This parameter, when set, signals the Oracle initialization routine to start an ASM instance and not a standard database instance. Unlike the standard database instance, the ASM instance contains no physical files; such as logfiles, controlfiles or datafiles, and only requires a few init.ora parameters for startup.
   
    Upon startup, an ASM instance will spawn all the basic background processes, plus some new ones that are specific to the operation of ASM. The STARTUP clauses for ASM instances are similar to those for database instances. For example, RESTRICT prevents database instances from connecting to this ASM instance. NOMOUNT starts up an ASM instance without mounting any disk group. MOUNT option simply mounts all defined diskgroups
   
    For RAC configurations, the ASM SID is +ASMx instance, where x represents the instance number.

*. What are the key benefits of ASM?

    ASM provides filesystem and volume manager capabilities built into the Oracle database kernel. With this capability, ASM simplifies storage management tasks, such as creating/laying out databases and disk space management. Since ASM allows disk management to be done using familiar create/alter/drop SQL statements, DBAs do not need to learn a new skill set or make crucial decisions on provisioning.

The following are some key benefits of ASM:

    1.ASM spreads I/O evenly across all available disk drives to prevent hot spots and maximize performance.
    2.ASM eliminates the need for over provisioning and maximizes storage resource utilization facilitating database consolidation.
    3.Inherent large file support.
    4.Performs automatic online redistribution after the incremental addition or removal of storage  capacity.
    5.Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID functionality.
    6.Supports Oracle Database as well as Oracle Real Application Clusters (RAC).
    7.Capable of leveraging 3rd party multipathing technologies.
    8.For simplicity and easier migration to ASM, an Oracle database can contain ASM and non-ASM files.
    9.Any new files can be created as ASM files whilst existing files can also be migrated to ASM.
    10.RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
    11.Enterprise Manager Database Control or Grid Control can be used to manage ASM disk and file activities.

   

*. What are different types of redundancies in ASM & explain?

    Normal redundancy -: for 2-way mirroring, requiring two failure groups. When ASM allocates an extent for a normal redundancy file, ASM allocates a primary copy and a secondary copy; ASM chooses the disk on which to store the secondary copy in a different failure group from the primary copy.
    High redundancy - for 3-way mirroring, requiring three failure groups, in this case the extent is mirrored across 3 disks.
    External redundancy - to not use ASM mirroring. This is used if you are using hardware mirroring or third party redundancy mechanism like RAID, Storage arrays.
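    For example (disk paths and diskgroup names are hypothetical):
    SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
           FAILGROUP fg1 DISK '/dev/rdsk/disk1', '/dev/rdsk/disk2'
           FAILGROUP fg2 DISK '/dev/rdsk/disk3', '/dev/rdsk/disk4';
    SQL> CREATE DISKGROUP fra EXTERNAL REDUNDANCY DISK '/dev/rdsk/disk5';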

*. How to find out the databases, which are using the ASM instance?
    ASMCMD> lsct
    SQL> select DB_NAME from V$ASM_CLIENT;

*. What is a diskgroup?

    A disk group consists of multiple disks and is the fundamental object that ASM manages. Each disk group contains the metadata that is required for the management of space in the disk group. The ASM instance manages the metadata about the files in a Disk Group in the same way that a file system manages metadata about its files. However, the vast majority of I/O operations do not pass through the ASM instance. In a moment we will look at how file
    I/O works with respect to the ASM instance.
   
*. What happens when an Oracle ASM diskgroup is created?
    When an ASM diskgroup is created, a hierarchical filesystem structure is created.
   
*. What is an incarnation number?
    An incarnation number is a part of the ASM filename syntax. It is derived from the timestamp. Once the file is created, its incarnation number does not change.



*. What is ASM striping ?
    ASM spreads data evenly across all disks in a disk group to optimize performance and utilization.    This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.
    To balance loads across all of the disks in a disk group
    To reduce I/O latency

*. What are different types of stripings in ASM & their differences?
    1.Fine-grained striping :- Fine striping writes 128 KB of data to each ASM disk in the diskgroup in a round-robin fashion: 128 KB goes to the first disk, then the next 128 KB goes to the next disk, and so on. According to the manual, "The fine-grained stripe size always equals 128 KB in any configuration; this provides lower I/O latency for small I/O operations." Small I/O operations make this a good candidate for redo logs, control files, etc. The fine-grained stripe size can be adjusted with the hidden parameter "_asm_stripesize".
   
    2.Coarse-grained striping :- With coarse-grained striping, ASM writes data to each disk in the same round-robin fashion, but writes chunks the size of the ASM instance's allocation unit (AU), which defaults to 1 MB. The AU size can be adjusted with the hidden parameter "_asm_ausize".
   
*. We have a 16 TB database. I’m curious about the number of disk groups we should use; e.g. 1 large disk group, a couple of     disk groups, or otherwise?

    For VLDBs you will probably end up with different storage tiers; e.g with some of our large customers they have Tier1 (RAID10 FC), Tier2 (RAID5 FC), Tier3 (SATA), etc. Each one of these is mapped to a diskgroup.
   
*. Can my RDBMS and ASM instances run different versions?

    Yes. ASM can be at a higher version or at a lower version than its client databases. There are two
    components of compatibility:
    Software compatibility
    Diskgroup compatibility attributes:
    "compatible.asm"    ---alter diskgroup dg2 set attribute 'compatible.asm'='11.1';
    "compatible.rdbms" ----alter diskgroup dg2 set attribute 'compatible.rdbms'='11.1';
   
*. Where do I run my database listener from; i.e., ASM HOME or DB HOME?

    It is recommended to run the listener from the ASM HOME. This is particularly important for RAC env, since the listener is a node-level resource. In this config, you can create additional [user] listeners from the database homes as needed.

*. How do I backup my ASM instance?
    Not applicable! ASM itself has no files to back up, as it does not contain controlfiles, redo logs, datafiles etc.

*. When should I use RMAN and when should I use ASMCMD copy?

    RMAN is the recommended and most complete and flexible method to backup and transport database files in ASM.
    ASMCMD copy is good for copying single files
    • Supports all Oracle file types
    • Can be used to instantiate a Data Guard environment
    • Does not update the controlfile
    • Does not create OMF files

*. I’m going to do add disks to my ASM diskgroup, how long will this rebalance take?

    Rebalance time is heavily driven by the three items:
    1) Amount of data currently in the diskgroup
    2) IO bandwidth available on the server
    3) ASM_POWER_LIMIT or Rebalance Power Level (GV$ASM_OPERATION;)

    ASM_POWER_LIMIT :  specifies the disk rebalancing speed of the ASM instance. The higher the limit, the faster the rebalance operation, but it consumes more CPU and I/O. Lower values take longer, but consume fewer processing and I/O resources.
        • Background process "ARBx" performs the rebalance activity (where x is a number).
        • If the POWER clause of a rebalance operation is not specified, then the default power will be the value of            ASM_POWER_LIMIT.
        • The range of asm_power_limit is from 0 to 1024. Default is one.
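        For example, to push a rebalance and watch its progress (the diskgroup name is hypothetical):
        SQL> ALTER DISKGROUP data REBALANCE POWER 8;
        SQL> SELECT group_number, operation, state, power, est_minutes FROM gv$asm_operation;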
       


*. We are migrating to a new storage array. How do I move my ASM database from storage A to storage B?
    Given that the new and old storage are both visible to ASM, simply add the new disks to the ASM disk group and drop the old disks. An ASM rebalance will migrate the data online.



*. How does ASM work with multipathing software?
    It works great! Multipathing software is at a layer lower than ASM, and thus is transparent.
    You may need to adjust ASM_DISKSTRING to specify only the path to the multipathing pseudo devices.
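    For example, with a hypothetical multipath pseudo-device path:
    SQL> ALTER SYSTEM SET asm_diskstring = '/dev/mapper/asm*' SCOPE=BOTH;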

*. What are the file types that ASM support and keep in disk groups?

    Control files ,Flashback logs ,Data Pump dump sets ,Data files ,DB SPFILE ,Data Guard configuration ,Temporary data files ,RMAN backup sets ,Change tracking bitmaps ,Online redo logs ,RMAN data file copies ,OCR files ,Archive logs ,Transport data files ,ASM SPFILE.

   
*.  Below are the HEADER_STATUS values in V$ASM_DISK (taken from 11gR2):
·UNKNOWN      - The Automatic Storage Management disk header has not been read.
·CANDIDATE    - The disk is not part of a disk group and may be added to a disk group with the ALTER DISKGROUP statement.
·INCOMPATIBLE - The version number in the disk header is not compatible with the Automatic Storage Management software version.
·PROVISIONED  - The disk is not part of a disk group and may be added to a disk group with the ALTER DISKGROUP statement. PROVISIONED differs from CANDIDATE in that it implies an additional platform-specific action has been taken by an administrator to make the disk available for Automatic Storage Management.
·MEMBER       - The disk is a member of an existing disk group. No attempt should be made to add the disk to a different disk group; the ALTER DISKGROUP statement will reject such an addition unless overridden with the FORCE option.
·FORMER       - The disk was once part of a disk group but has been dropped cleanly from the group. It may be added to a new disk group with the ALTER DISKGROUP statement.
·CONFLICT     - The Automatic Storage Management disk was not mounted due to a conflict.
·FOREIGN      - The disk contains data created by an Oracle product other than ASM. This includes datafiles, logfiles, and OCR disks.

*. What is kfed?
    kfed is a utility which can be used to view ASM disk header information. The syntax for using it is
    kfed read <device_name>

*.  ASM Specific Init.ora Parameters
    .cluster_database= true
    .asm_diskstring = '/dev/sd*1'
    .instance_type=asm
    .shared_pool_size=100M
    .large_pool_size = 80M
    .db_cache_size=60M
    .asm_diskgroups = 'DATA','FRA'
    .processes=128   
       
*. ASM Instance Background Processes:
   ---------------------------------
    •ARBx (ASM) : Rebalance working process ARBn performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these processes running at a time, named ARB0, ARB1, and so on.These processes are managed by the RBAL process. The number of ARBx processes invoked is directly influenced by the asm_power_limit parameter.

    •RBAL (Re-balancer)  : RBAL runs in both database and ASM instances. In the database instance, it does a global open of ASM disks. In an ASM instance, it also coordinates rebalance activity for disk groups.RBAL, which coordinates rebalance activities
    for disk resources controlled by ASM.

    Database Instance ASM Background Processes:
    ------------------------------------------
    In the database instances, there are three background process to support ASM, namely:

    •ASMB : this process contacts CSS using the disk group name and acquires the associated ASM connect string. The connect string is subsequently used to connect to the ASM instance.

    •RBAL, :which performs global opens on all disks in the disk group.A global open means that more than one database instance can be accessing the ASM disks at a time.

    .O00x, : a group slave processes, with a numeric sequence starting at 000.
   

    PATCHING : (the easiest way is to use opatch auto)
        command  : $ORACLE_HOME/OPatch/opatch auto $DIR/$GRID_DB_PATCH -oh $ORACLE_HOME -ocmrf $ORACLE_HOME/dbs/ocm.rsp
        OCW      : primary patch for ASM
        ACFS     : secondary patch for ASM
        DB PSU   : optional but recommended patch for ASM
       
REM VIEW            |ASM INSTANCE                                     |DB INSTANCE
REM ----------------------------------------------------------------------------------------------------------
REM V$ASM_DISKGROUP |Describes a disk group (number, name, size       |Contains one row for every open ASM
REM                 |related info, state, and redundancy type)        |disk in the DB instance.
REM V$ASM_CLIENT    |Identifies databases using disk groups           |Contains no rows.
REM                 |managed by the ASM instance.                     |
REM V$ASM_DISK      |Contains one row for every disk discovered       |Contains rows only for disks in the
REM                 |by the ASM instance, including disks that        |disk groups in use by that DB instance.
REM                 |are not part of any disk group.                  |
REM V$ASM_FILE      |Contains one row for every ASM file in every     |Contains rows only for files that are
REM                 |disk group mounted by the ASM instance.          |currently open in the DB instance.
REM V$ASM_TEMPLATE  |Contains one row for every template present in   |Contains no rows.
REM                 |every disk group mounted by the ASM instance.    |
REM V$ASM_ALIAS     |Contains one row for every alias present in      |Contains no rows.
REM                 |every disk group mounted by the ASM instance.    |
REM v$ASM_OPERATION |Contains one row for every active ASM long       |Contains no rows.
REM                 |running operation executing in the ASM instance. |

   
COMMANDS :

    /etc/init.d/oracleasm
    start-----start asm instance
    stop-----stop asm instance
    restart---restart asm
    configure--set configuration of asm ( -i)
    status----to check asm status
    enable----enable asm services
    disable---disable asm services
    listdisks--check the asm disks
    deletedisk---delete disk from asm
    scandisks----scan disks for ASM diskgroups
    querydisk /dev/sdd1 ----check which disk group a disk belongs to
    createdisk VOL1 /dev/sdb1 ----create disk using file system
    renamedisk /dev/sdb1 VOL1 ----rename disk
   
   

    ADD --         srvctl add asm -n node_name -i +asm_instance_name -o oracle_home
    REMOVE--    srvctl remove asm -n node_name [-i +asm_instance_name]
    ENABLE --    srvctl enable asm -n node_name [-i ] +asm_instance_name
    DISABLE --    srvctl disable asm -n node_name [-i +asm_instance_name]
    START --    srvctl start asm -n node_name [-i +asm_instance_name] [-o start_options]
    STOP --        srvctl stop asm -n node_name [-i +asm_instance_name] [-o stop_options]
    CONFIG--    srvctl config asm -n node_name
    STATUS --    srvctl status asm -n node_name
   
    CHECK REBALANCE : v$asm_operation
    CHECK CLIENT    : v$asm_client or lsct
    CHECK SPACE     : v$asm_disk, v$asm_diskgroup
    CHECK ALIAS     : v$asm_alias    (X$KFFXP)
   
    create diskgroup  : create diskgroup FRA1 external redundancy disk '/dev/vx/rdsk/oraASMdg/fra1'
                        --ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';
   
    alter diskgroup
        check:         alter diskgroup FRA1  check all;
        mount:      alter diskgroup FRA1 mount;
        add disk:    alter diskgroup FRA1 add disk '/dev/vx/rdsk/oraASMdg/fra2'; OR  '/dev/vx/rdsk/oraASMdg/fra*';
        remove disk:alter diskgroup FRA1 drop disk 'FRA1_0002';
        rebalance : alter diskgroup DATA1 rebalance power 10;

    drop diskgroup : drop diskgroup DATA1 including contents;
   
   
    •copy command :
        ASM to filesystem -->
        ASMCMD> cp dumpfile1.dmp dumpfile1.dmp dumpfile1.dmp /oracle/backup/testdb/expdp
                copying +FRA/TESTDB/EXPDP/dumpfile1.dmp -> /oracle/backup/testdb/expdp/dumpfile1.dmp
                copying +FRA/TESTDB/EXPDP/dumpfile2.dmp -> /oracle/backup/testdb/expdp/dumpfile2.dmp
                copying +FRA/TESTDB/EXPDP/dumpfile3.dmp -> /oracle/backup/testdb/expdp/dumpfile3.dmp   
       
        •File system to asm : -->
        ASMCMD> cp /oracle/backup/testdb/expdp/dumpfile1.dmp '+FRA/TESTDB/EXPDP/'
                copying /oracle/backup/testdb/expdp/dumpfile1.dmp -> +FRA/TESTDB/EXPDP/dumpfile1.dmp
        ASMCMD> cp /oracle/backup/testdb/expdp/dumpfile2.dmp '+FRA/TESTDB/EXPDP/'
                copying /oracle/backup/testdb/expdp/dumpfile2.dmp -> +FRA/TESTDB/EXPDP/dumpfile2.dmp
        ASMCMD> cp /oracle/backup/testdb/expdp/dumpfile3.dmp '+FRA/TESTDB/EXPDP/'
                copying /oracle/backup/testdb/expdp/dumpfile3.dmp -> +FRA/TESTDB/EXPDP/dumpfile3.dmp
        ASMCMD> ls
                dumpfile1.dmp
                dumpfile2.dmp
                dumpfile3.dmp
       
        OR 
                set timing on
                BEGIN
                dbms_file_transfer.get_file('SOURCE_DUMP',
                'test.dmp',
                'SOURCEDB',
                'TARGET_DUMP',
                'test.dmp');
                END;
                /
               
        OR
       
        RMAN> copy datafile '/u01/oradata/racdb/trtst01.dbf' to '+DATADG';
       
        RMAN> COPY DATAFILE '+ASMDISK2/orcl/datafile/users.256.565313879' TO '+ASMDISK1';

    •If you have used RMAN (method 4 b) use the following option of RMAN
        RMAN>
        run
        {
        set newname for datafile '+ASMDISK2/orcl/datafile/users.256.565313879'
        to '+ASMDISK1/orcl/datafile/users.259.565359071';
        switch datafile all;
        }
       
    •Delete the datafile from its original location.

        SQL> ALTER DISKGROUP ASMDISK2 DROP FILE users.256.565313879;
        or
        ASMCMD> rm -rf <filename>