Monday, December 29, 2008

Using LogMiner Viewer to Perform a Logical Recovery

Module Objectives
Purpose

In this module, you will learn how to use LogMiner to analyze your redo log files so that you can logically recover your database.
Objectives

After completing this module, you should be able to:
Use LogMiner to analyze the redo log files
Perform a logical recovery of the database
Prerequisites

Before starting this module, you should have:

Completed the preinstallation tasks
Installed the Oracle9i database
Completed the postinstallation tasks
Reviewed the sample schema
Enabled archiving
Downloaded the logminer.zip module files and unzipped them into your working directory
Reference Material

The following is a list of useful reference material if you want additional information about the topics in this module:


Documentation: A96521-01: Oracle9i Database Administrator's Guide


Overview

In this module, you will examine the following topics:
Analysis of Redo Logs
Oracle9i LogMiner Viewer
Analysis of Redo Logs

Knowing what changes have been made to data is useful for many things:
Trace and audit requirements
Application debugging
Logical recovery
Tuning and capacity planning

The redo logs provide a single source of data, including:
All of the data needed to perform recovery (a record of every change to the data and the metadata or data structures)
All of the data needed to pinpoint when a logical corruption to the database began. Pinpointing this time is important because it tells you where to aim a time-based or change-based recovery that restores the database to the state just prior to the corruption (an illustrative example follows this list).
An alternative, supplemental source of data for tuning and capacity planning
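For example, if analysis showed that the corruption began at 10:00 a.m., a time-based incomplete recovery could roll the database forward to just before that point. A minimal SQL*Plus sketch with a purely illustrative timestamp (the datafiles must first be restored from a backup):

-- Illustrative time-based incomplete recovery; the timestamp is a placeholder.
RECOVER DATABASE UNTIL TIME '2008-12-29:09:59:00';
-- An incomplete recovery requires opening the database with RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;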

Many customers keep this data in archived files, but using the redo logs as the source means that the information is always available and no extra resources are spent collecting it (although the data still needs to be translated into a usable form).

You can analyze redo logs for historical trends, data access patterns, and frequency of changes to the database, as well as to track specific sets of changes based on transaction, user ID, table, block, and so on.
Oracle9i LogMiner Viewer

Oracle9i LogMiner is a powerful tool that reads the redo logs to provide direct access to the changes that have been made to your databases. With LogMiner, you can audit past database activity, pinpoint when logical corruption occurred, and determine where to perform statement-level recovery. LogMiner also gives you a wealth of information for diagnosing problems with an application, analyzing resource utilization and historical performance, and tuning the database.

LogMiner Viewer is a graphical user interface integrated with Enterprise Manager. LogMiner Viewer allows you to easily specify queries to view data from the redo logs, including SQL redo and undo statements.

In Oracle8i, LogMiner was introduced as the only Oracle-supported method of accessing the redo logs. This module highlights the new features introduced with Oracle9i LogMiner Viewer: additional data types and storage types, additional DDL statements, queries based on actual data values in the redo logs, and continuous monitoring of the redo stream.
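Although this module drives everything through the graphical Viewer, the underlying engine is the DBMS_LOGMNR package, and it can help to see roughly what the Viewer does on your behalf. A minimal sketch from SQL*Plus; the archived log file name is hypothetical, so substitute one reported by ARCHIVE LOG LIST:

CONNECT sys/oracle@orcl.world AS SYSDBA

BEGIN
  -- Register one archived redo log as the (new) mining list.
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => 'D:\wkdir\arch\ARC00042.001',  -- hypothetical path
    Options     => DBMS_LOGMNR.NEW);
  -- Start the session, resolving object names from the online catalog.
  DBMS_LOGMNR.START_LOGMNR(
    Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- The mined changes, including redo and undo SQL, are now queryable.
SELECT sql_redo, sql_undo FROM v$logmnr_contents WHERE seg_owner = 'OE';

EXECUTE DBMS_LOGMNR.END_LOGMNR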
Using LogMiner: Steps

In this module, you will use LogMiner to analyze redo log files and logically recover the database. After enabling archiving in the production environment, you will perform the following steps:
1. Perform an operation which corrupts the database.
2. Use LogMiner Viewer to analyze the redo log files.
3. Perform a logical recovery.

Step 3, in turn, has several substeps, which will be detailed later.

Performing an Operation Which Corrupts the Database


Now you will create a situation in the database that could lead to the need for a logical recovery. You will change the schema definition for the OE user in such a way that deleting a single product leaves the tables logically corrupt.
1. From a SQL*Plus session connected to the production database, execute the following SQL script:

@alter_table.sql

CONNECT OE/OE@orcl.world

ALTER TABLE Order_Items
DROP CONSTRAINT Order_Items_Product_ID_FK;

ALTER TABLE Inventories
DROP CONSTRAINT Inventories_Product_ID_FK;

ALTER TABLE Product_Descriptions
DROP CONSTRAINT PD_Product_ID_FK;

ALTER TABLE Order_Items
ADD CONSTRAINT Order_Items_Product_ID_FK
FOREIGN KEY (Product_Id)
REFERENCES Product_Information
ON DELETE CASCADE;

ALTER TABLE Inventories
ADD CONSTRAINT Inventories_Product_ID_FK
FOREIGN KEY (Product_Id)
REFERENCES Product_Information
ON DELETE CASCADE;

Although these are perfectly valid DDL statements, adding the ON DELETE CASCADE clause to these constraints causes the matching ORDER_ITEMS and INVENTORIES rows to be deleted automatically whenever a product is deleted.
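Before running the delete in the next step, you can gauge how far the cascade will reach. A quick check of the dependent rows, assuming product ID 3000 (the product deleted below):

-- How many rows will the cascade silently remove for this product?
SELECT COUNT(*) FROM order_items WHERE product_id = 3000;
SELECT COUNT(*) FROM inventories WHERE product_id = 3000;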
2. To illustrate this, delete a product from the product_information table. From a SQL*Plus session connected to the production database, execute the following SQL script:

@delete_product.sql
CONNECT OE/OE@orcl.world
DELETE FROM Product_Information
WHERE Product_Id = 3000;
COMMIT;

Although only one row has been deleted from the product_information table, rows have also been deleted from the order_items and inventories tables. This may not be discovered until later, when executing a report reveals the logical corruption.


3. Now run a report that compares the sum of all order items in an order with the total for that order. From a SQL*Plus session connected to the production database, execute the following SQL script:

@comp_totals.sql

CONNECT OE/OE@orcl.world
SELECT so.order_id, SUM(so.order_total), SUM(si.total)
FROM ( SELECT order_id, SUM((unit_price * quantity)) AS total
FROM order_items
GROUP BY order_id
) si,
orders so
WHERE so.order_id = si.order_id
GROUP BY so.order_id
HAVING SUM(so.order_total) != SUM(si.total);

The inconsistency in the data is easy to see: the order totals no longer match the sum of their order item lines. The data has become logically corrupt.


4. In the SQL*Plus session, execute the following script:

@force_switch.sql

connect sys/oracle@orcl.world as sysdba
alter system archive log current;
archive log list;


This log switch was forced to move the DML statements from the online redo log into an archived redo log. The undo SQL statements needed to repair the corruption will be mined from the archived redo logs.
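If you want to confirm which archived log file now holds the DELETE, you can query V$ARCHIVED_LOG; a minimal sketch:

-- The most recently archived log should contain the DELETE statements.
SELECT name, sequence#, completion_time
FROM v$archived_log
WHERE sequence# = (SELECT MAX(sequence#) FROM v$archived_log);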



You can now use LogMiner Viewer to detect what happened, and even to undo the transactions that caused the corruption.
Using LogMiner Viewer to Analyze the Redo Log Files


You are interested in finding all the transactions performed on the product_information, order_items, and inventories tables that may account for the discrepancy in your report. Therefore, you decide to analyze all the transactions in the list of archived redo log files.
1. Connect to the OMS as sysman/sysman.

2. Select Tools > Database Applications > LogMiner Viewer.


3. LogMiner Viewer requires a database account with the SYSDBA privilege. You will log on as SYS; the preferred credentials are already defined in the Enterprise Manager environment.

From the Navigator pane, right-click orcl.world and select Create Query.


4. The three tables involved in the DELETE operation were PRODUCT_INFORMATION, ORDER_ITEMS, and INVENTORIES. You want to search for DELETE operations performed on objects owned by OE, so the filter should be:

OWNER=OE AND OPERATION=DELETE

First, select OWNER from the drop-down menu and type OE in the box after the equals sign. Then click AND.


5. Select OPERATION and enter DELETE.


6. Click Execute.

The DELETE operations previously performed now appear in the Query Results panel at the bottom of the window. Although only one DELETE statement was issued, the ON DELETE CASCADE option caused two additional rows to be deleted. You can review any row by double-clicking it.
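Incidentally, the filter you built in the GUI corresponds to a simple predicate on V$LOGMNR_CONTENTS. Assuming a LogMiner session has already been started, the equivalent query is roughly:

-- Equivalent of the OWNER=OE AND OPERATION=DELETE filter in the Viewer.
SELECT timestamp, seg_name, sql_redo, sql_undo
FROM v$logmnr_contents
WHERE seg_owner = 'OE'
AND operation = 'DELETE';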


7. After reviewing the rows, click Close.


8. Select the Display Options tab. Uncheck everything but SQL_UNDO. Then click Execute.


9. Click the Save Redo/Undo... button.


10. Change Format to Text. Change File Name to D:\wkdir\sqlundo.sql. Then click OK.


11. To exit, click Cancel, then select File > Exit.




Performing a Logical Recovery


Performing a logical recovery consists of two substeps:
1. Performing the logical recovery
2. Verifying the logical recovery


1. Performing the Logical Recovery

To undo the changes that were made, use the generated script:

1. In SQL*Plus, connect as OE/OE and execute the sqlundo.sql script, then commit:

connect oe/oe@orcl.world
@sqlundo.sql
commit;
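As a quick sanity check (assuming product 3000 was the row deleted earlier), you can confirm that the product and its dependent rows are back:

-- The deleted product and its cascaded rows should have been reinserted.
SELECT COUNT(*) FROM product_information WHERE product_id = 3000;
SELECT COUNT(*) FROM order_items WHERE product_id = 3000;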


2. Verifying the Logical Recovery

To test that the data has been recovered, perform the following steps:
1. Run the same report, which compares the sum of all order items in an order with the total for that order. From a SQL*Plus session connected to the production database, execute the following script:

@comp_totals.sql

CONNECT OE/OE@orcl.world

SELECT so.order_id, SUM(so.order_total), SUM(si.total)
FROM ( SELECT order_id, SUM((unit_price * quantity)) AS total
FROM order_items
GROUP BY order_id
) si,
orders so
WHERE so.order_id = si.order_id
GROUP BY so.order_id
HAVING SUM(so.order_total) != SUM(si.total);


This time, no order totals differ from the sums of their order items. You have fully recovered the database from the mistake.




Resetting the Environment

Now you should reset the foreign key constraints for the order_items, inventories, and product_descriptions tables back to their original versions.
1. From a SQL*Plus session connected to the production database, enter @reset_constraints.sql to reset the constraints to their original version.

@reset_constraints.sql

CONNECT OE/OE@orcl.world

ALTER TABLE Order_Items
DROP CONSTRAINT Order_Items_Product_ID_FK;
ALTER TABLE Inventories
DROP CONSTRAINT Inventories_Product_ID_FK;
ALTER TABLE Order_Items ADD CONSTRAINT Order_Items_Product_ID_FK
FOREIGN KEY (Product_Id)
REFERENCES Product_Information;
ALTER TABLE Inventories ADD CONSTRAINT Inventories_Product_ID_FK
FOREIGN KEY (Product_Id)
REFERENCES Product_Information;
ALTER TABLE Product_Descriptions ADD CONSTRAINT PD_Product_ID_FK
FOREIGN KEY (Product_Id)
REFERENCES Product_Information;
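To confirm that the cascading behavior is gone, you can check the DELETE_RULE of the restored constraints in USER_CONSTRAINTS; each should now read NO ACTION rather than CASCADE:

-- Verify that none of the restored foreign keys cascade deletes.
SELECT constraint_name, table_name, delete_rule
FROM user_constraints
WHERE constraint_type = 'R'
AND table_name IN ('ORDER_ITEMS', 'INVENTORIES', 'PRODUCT_DESCRIPTIONS');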


Module Summary

In this module, you should have learned how to:
Use LogMiner to analyze the redo log files
Perform a logical recovery of the database
