Tuesday, October 5, 2010

Monitoring Alert Log

Monitoring the alert log has become a lot easier with Oracle 11g. In Oracle 11g, the contents of the alert log are exposed via a fixed table.

Also, in Oracle 11g, the alert log is stored in XML format. You can use ADRCI to view / purge the contents. For ADRCI, please see one of my posts here ( http://mfzahirdba.blogspot.com/2010/05/health-check.html )
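As a quick sketch of the ADRCI route (the homepath below is a placeholder; pick yours from the output of show homes):

```shell
adrci> show homes
adrci> set homepath diag/rdbms/orcl/orcl
adrci> show alert -tail 50
adrci> purge -age 1440 -type alert
```

show alert -tail 50 displays the last 50 lines of the alert log; purge -age 1440 -type alert removes alert entries older than one day (1440 minutes).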

Prior to this release, you could construct an external table pointing to the alert log. For this approach, please refer to chapter 3 of Tom Kyte's Expert Oracle book.
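For reference, here is a rough sketch of that external-table approach (the directory path and log file name are placeholders; substitute your own background_dump_dest and alert log name):

```sql
CREATE OR REPLACE DIRECTORY bdump_dir AS '/u01/app/oracle/admin/orcl/bdump';

-- Each line of the alert log becomes one row in this table.
CREATE TABLE alert_log_ext
( text_line VARCHAR2(4000) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY bdump_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS
    REJECT ROWS WITH ALL NULL FIELDS
  )
  LOCATION ('alert_orcl.log')
)
REJECT LIMIT UNLIMITED;
```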

Let us return to using the fixed table (it does not seem to be documented). The alert log is exposed as "X$DBGALERTEXT".

Let us create a view so that we can grant access to non-SYS users.

SQL> CREATE OR REPLACE VIEW SYS.ALERT_LOG_v
2 AS
3 SELECT * FROM X$DBGALERTEXT;



View created.

SQL> grant select on SYS.ALERT_LOG_v to scott ;

Grant succeeded.

SQL> conn scott/tiger



Connected.


SQL> desc sys.alert_log_v


Name                                      Null?    Type
----------------------------------------- -------- ----------------------------

ADDR                                      RAW(8)
INDX                                      NUMBER
INST_ID                                   NUMBER
ORIGINATING_TIMESTAMP                     TIMESTAMP(3) WITH TIME ZONE
NORMALIZED_TIMESTAMP                      TIMESTAMP(3) WITH TIME ZONE
ORGANIZATION_ID                           VARCHAR2(64)
COMPONENT_ID                              VARCHAR2(64)
HOST_ID                                   VARCHAR2(64)
HOST_ADDRESS                              VARCHAR2(46)
MESSAGE_TYPE                              NUMBER
MESSAGE_LEVEL                             NUMBER
MESSAGE_ID                                VARCHAR2(64)
MESSAGE_GROUP                             VARCHAR2(64)
CLIENT_ID                                 VARCHAR2(64)
MODULE_ID                                 VARCHAR2(64)
PROCESS_ID                                VARCHAR2(32)
THREAD_ID                                 VARCHAR2(64)
USER_ID                                   VARCHAR2(64)
INSTANCE_ID                               VARCHAR2(64)
DETAILED_LOCATION                         VARCHAR2(160)
PROBLEM_KEY                               VARCHAR2(64)
UPSTREAM_COMP_ID                          VARCHAR2(100)

....
.....
 
I can issue the following query (in a recurring job) to see if there are a lot of "ORA-" errors in the last 15 minutes.
 
SQL> SELECT message_text ,

2 client_id ,
3 originating_timestamp
4 FROM sys.alert_log_v
5 WHERE originating_timestamp >= sysdate-(1/24/60)*15
6 AND message_text LIKE '%ORA-%';


no rows selected


Monitoring the alert log is made easier with this approach.

Wednesday, July 14, 2010

Database Independence...

Write the database code once, independent of the DBMS ....


We will write the SQL to the lowest common denominator ... We should not use the vendor's extensions ...

Sounds like a noble cause ....


Let us translate that into a real-world scenario ...

I have a Chevy Impala with built-in OnStar and a BMW with built-in GPS ... We should not use either feature, as it is not common across these vehicles ....

Sounds right ? ....

For example, Oracle has unique features like CONNECT BY, analytics, the MODEL clause, bitmap indexes, etc., and SQL Server has filtered indexes, ....
These DBMSs are catching up with each other on some of the features, like PIVOT, analytics, ...

Besides these, locking and concurrency are implemented in totally different ways in these DBMSs.

If you assume, from a SQL Server perspective, that this is the way it is supposed to work in Oracle, you could be in for a surprise.

The best way to learn a DBMS is to learn that DBMS from scratch (unlearn the stuff from other DBMSs).

The best way to use a DBMS is to explore the features of that DBMS (be it Oracle, DB2, Sybase / SQL Server, MySQL ..) and use them to the fullest extent possible.

The best way to achieve database independence is to have transactional APIs built into the application that call the database through packaged procedures / functions (in Oracle) or procedures and functions (in non-Oracle) .... i.e., there will be no embedded SQL .... By this method, we can utilize the DB-centric calls properly ...
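A minimal sketch of such a transactional API (the package, procedure, and table names here are hypothetical):

```sql
-- Hypothetical example: the application calls order_api.place_order
-- and never embeds SQL of its own.
CREATE OR REPLACE PACKAGE order_api AS
  PROCEDURE place_order (p_customer_id IN NUMBER, p_amount IN NUMBER);
END order_api;
/
CREATE OR REPLACE PACKAGE BODY order_api AS
  PROCEDURE place_order (p_customer_id IN NUMBER, p_amount IN NUMBER) IS
  BEGIN
    -- Inside the API we are free to use vendor-specific SQL
    -- (analytics, MODEL, hints, etc.) without affecting the caller.
    INSERT INTO orders (customer_id, amount, created)
    VALUES (p_customer_id, p_amount, SYSDATE);
  END place_order;
END order_api;
/
```

On a non-Oracle DBMS, only the body needs to be re-implemented; the application-facing signature stays the same.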

We can have the cake and eat it too :- )

Friday, June 4, 2010

Using Partition Outer Join

Most of the time, an analyst needs a report that displays the activity for a month on a daily basis.

In my case, I have scripted a data-import job from an external system, and I would like to see the record count on a daily basis for the past week.

Let us see an example. Let us create a table and populate it with data.

Here is the table structure and the data.

SQL> desc datalog
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 LOADDATE                                  NOT NULL DATE
 TABLENAME                                          VARCHAR2(30)
 ROWCOUNT                                           NUMBER


SQL> select * from datalog;


LOADDATE  TABLENAME                        ROWCOUNT
--------- ------------------------------ ----------
29-MAY-10 ORDER                             5012577
29-MAY-10 PAYMENT                            165851
30-MAY-10 ORDER                             5012577
30-MAY-10 PAYMENT                              5073
31-MAY-10 ORDER                             5012577
31-MAY-10 PAYMENT                              5073
01-JUN-10 ORDER                                5003
01-JUN-10 PAYMENT                              5073
02-JUN-10 ORDER                                5003
02-JUN-10 PAYMENT                              5073
04-JUN-10 ORDER                                5003
04-JUN-10 PAYMENT                              5073

12 rows selected.

Here is the SQL to display the report. It uses a partition outer join so that a date shows up even when it has no data (no row count for 3rd June, in our example). I would also like to transpose the result using PIVOT so that we get a better view (like a cross-tab report).
Here is the SQL and its output.

SQL> SELECT tablename ,
2 DAY1,
3 DAY2,
4 DAY3,
5 DAY4,
6 DAY5,
7 DAY6,
8 DAY7
9 FROM
10 (SELECT Load.Tablename ,
11 NVL(Load.RowCount, 0 ) RowCount ,
12 row_number() over ( partition BY Tablename order by listdatetime DESC ) rn
13 FROM
14 (SELECT loaddate ,
15 Tablename ,
16 Rowcount
17 FROM DataLog
18 WHERE LoadDate >= Sysdate - 7
19 ) Load PARTITION BY (Tablename)
20 RIGHT OUTER JOIN
21 (SELECT to_date(sysdate - level + 1 )listdatetime
22 FROM Dual
23 CONNECT BY Level <= 7
24 ) Tim
25 ON ( TRUNC(LoadDate ) = TRUNC(Listdatetime) )
26 ) Pivot ( MAX(Rowcount) FOR Rn IN ( 1 AS Day1 , 2 AS Day2 , 3 AS Day3 , 4 AS Day4 , 5 AS Day5 , 6 AS Day6 , 7 AS Day7 ) )
27 ORDER BY tablename
28 ;

TABLENAME                            DAY1       DAY2       DAY3       DAY4       DAY5       DAY6       DAY7
------------------------------ ---------- ---------- ---------- ---------- ---------- ---------- ----------
ORDER                                5003          0       5003       5003    5012577    5012577    5012577
PAYMENT                              5073          0       5073       5073       5073       5073     165851

Good luck.

Wednesday, May 12, 2010

Health Check

Oracle 11g provides a new tool to check the health of the database.
The package is DBMS_HM. The database detects issues as it encounters them.
Checks can also be invoked manually; however, not all checks can be executed that way.

The following is the structure of the view V$HM_CHECK.


SQL> desc v$hm_check

ID               NUMBER
NAME             VARCHAR2(64)
NAME_NLS         VARCHAR2(1024)
CLSID            NUMBER
CLS_NAME         VARCHAR2(15)
FLAGS            NUMBER
INTERNAL_CHECK   VARCHAR2(1)
OFFLINE_CAPABLE  VARCHAR2(1)
DESCRIPTION      VARCHAR2(1024)
Only the checks with INTERNAL_CHECK <> 'Y' can be executed manually.
The results can be viewed through the ADRCI tool.
These are the checks in V$HM_CHECK.


SQL> col name format a35 wrapped
SQL> col description format a55 wrapped
SQL> Select name , description , internal_check from v$hm_check;


HM Test Check                       Check for health monitor functionality          Y
DB Structure Integrity Check        Checks integrity of all database files          N
CF Block Integrity Check            Checks integrity of a control file block        N
Data Block Integrity Check          Checks integrity of a data file block           N
Redo Integrity Check                Checks integrity of redo log content            N
Logical Block Check                 Checks logical content of a block               Y
Transaction Integrity Check         Checks a transaction for corruptions            N
Undo Segment Integrity Check        Checks integrity of an undo segment             N
No Mount CF Check                   Checks control file in NOMOUNT mode             Y
Mount CF Check                      Checks control file in mount mode               Y
CF Member Check                     Checks a multiplexed copy of the control file   Y
All Datafiles Check                 Checks all datafiles in the database            Y
Single Datafile Check               Checks a data file                              Y
Tablespace Check Check              Checks a tablespace                             Y
Log Group Check                     Checks all members of a log group               Y
Log Group Member Check              Checks a particular member of a log group       Y

.....
27 rows selected.


SQL> exec dbms_hm.run_check('Dictionary Integrity Check' , 'HM run');
PL/SQL procedure successfully completed.

View the results of the check in ADRCI.
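In ADRCI, the health-monitor runs can be listed and reported on. A sketch of the commands I would use (the quoting of a run name containing a space may vary by platform):

```shell
adrci> show hm_run
adrci> create report hm_run "HM run"
adrci> show report hm_run "HM run"
```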

D:\test>adrci
ADRCI: Release 11.2.0.1.0 - Production on Wed May 12 13:15:03 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
ADR base = "c:\app\oracle"
adrci>






**********************************************************


HM RUN RECORD 9


**********************************************************


RUN_ID                           21

RUN_NAME                         HM run

CHECK_NAME                       Dictionary Integrity Check

NAME_ID                          24

MODE                             0

START_TIME                       2010-05-12 13:09:22.671000 -04:00
RESUME_TIME                     
END_TIME                         2010-05-12 13:09:37.781000 -04:00
MODIFIED_TIME                    2010-05-12 13:09:37.781000 -04:00
TIMEOUT                          0
FLAGS                            0
STATUS                           5
SRC_INCIDENT_ID                  0
NUM_INCIDENTS                    0
ERR_NUMBER                       0
REPORT_FILE                      



As mentioned above, only the non-internal checks can be invoked manually.
As you can see, the following gives an error message, as this is an internal check.

SQL> exec dbms_hm.run_check('CF Member Check' , 'HM run');
BEGIN dbms_hm.run_check('CF Member Check' , 'HM run'); END;
*
ERROR at line 1:
ORA-51001: check [CF Member Check] not found in HM catalog
ORA-06512: at "SYS.DBMS_HM", line 191
ORA-06512: at line 1


Also, some of the checks are only for an ASM instance.

SQL> exec dbms_hm.run_check('ASM Disk Visibility Check' , 'HM check asm run');
BEGIN dbms_hm.run_check('ASM Disk Visibility Check' , 'HM check asm run'); END;
*
ERROR at line 1:
ORA-51037: check [ASM Disk Visibility Check] can only be executed in ASM
instance
ORA-06512: at "SYS.DBMS_HM", line 191
ORA-06512: at line 1

Welcome any feedback.

Friday, April 2, 2010

Case Sensitivity

Oracle is case sensitive by default.


To make queries case insensitive, we would need to change the NLS settings.

Here is an example ....

Note: as usual, please regression-test your application before making these changes.


SQL> show parameter nls
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
nls_calendar string
nls_comp string BINARY
nls_currency string
nls_date_format string
nls_date_language string
nls_dual_currency string
nls_iso_currency string
nls_language string AMERICAN
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string
nls_sort string
nls_territory string AMERICA
nls_time_format string
nls_time_tz_format string
nls_timestamp_format string
nls_timestamp_tz_format string


SQL>



SQL> select * from scott.emp where ename like 'smith';
no rows selected


SQL> alter session set nls_comp=linguistic;
Session altered.

SQL> alter session set nls_sort = binary_ci;
Session altered.

SQL> select * from scott.emp where ename like 'smith';

     EMPNO ENAME      JOB         MGR HIREDATE        SAL  COMM  DEPTNO
      7369 SMITH      CLERK      7902 17-DEC-80       800            20

SQL> select * from scott.emp where ename like 'sMith';

     EMPNO ENAME      JOB         MGR HIREDATE        SAL  COMM  DEPTNO
      7369 SMITH      CLERK      7902 17-DEC-80       800            20

SQL> select * from scott.emp where ename like 'SMITH';

     EMPNO ENAME      JOB         MGR HIREDATE        SAL  COMM  DEPTNO
      7369 SMITH      CLERK      7902 17-DEC-80       800            20

Tuesday, March 16, 2010

Spreadsheet like calcing (Using MODEL clause)

Oracle 10g introduced the MODEL clause to simulate spreadsheet-like calculations in SQL. It is complex, yet powerful. There is a chapter on this clause in the Data Warehousing Guide.

I really wish this (along with analytics) were moved to the Application Developer's Guide.

Here is an example ... In the following example, we create a new column called new_sal and give all the employees a 50% raise ( .. in this economy, this would be a jackpot ... )

SQL>  SELECT empno,ename , deptno, sal , new_sal
  2   FROM scott.emp
  3   MODEL
  4   PARTITION BY (empno , ename , sal)
  5   DIMENSION BY (deptno)
  6   MEASURES (sal new_sal) IGNORE NAV
  7   RULES ( new_sal[ANY] = CV(sal) * 1.5)
  8   ORDER BY deptno;



     EMPNO ENAME          DEPTNO        SAL    NEW_SAL
---------- ---------- ---------- ---------- ----------
      7782 CLARK              10       2450       3675
      7934 MILLER             10       1300       1950
      7839 KING               10       5000       7500
      7902 FORD               20       3000       4500
      7566 JONES              20       2975     4462.5
      7788 SCOTT              20       3000       4500
      7876 ADAMS              20       1100       1650
      7369 SMITH              20        800       1200
      7844 TURNER             30       1500       2250
      7900 JAMES              30        950       1425
      7698 BLAKE              30       2850       4275
      7521 WARD               30       1250       1875
      7499 ALLEN              30       1600       2400
      7654 MARTIN             30       1250       1875


14 rows selected.


Now I can create additional records (for display only; nothing is saved to the database yet) with the following SQL. Here I am assigning all the employees to dept 10 (in addition to their own departments).

For example, you can see employees such as SCOTT and FORD in both depts (20 and 10).




  1   SELECT empno,ename , deptno, sal , new_sal
  2   FROM scott.emp
  3   MODEL
  4   PARTITION BY (empno , ename , sal)
  5   DIMENSION BY (deptno)
  6   MEASURES (sal new_sal) IGNORE NAV
  7   RULES ( new_sal[10] = CV(sal) * 1.5)
  8*  ORDER BY deptno
SQL> /

     EMPNO ENAME          DEPTNO        SAL    NEW_SAL
---------- ---------- ---------- ---------- ----------
      7934 MILLER             10       1300       1950
      7902 FORD               10       3000       4500
      7844 TURNER             10       1500       2250
      7788 SCOTT              10       3000       4500
      7900 JAMES              10        950       1425
      7698 BLAKE              10       2850       4275
      7369 SMITH              10        800       1200
      7654 MARTIN             10       1250       1875
      7876 ADAMS              10       1100       1650
      7782 CLARK              10       2450       3675
      7521 WARD               10       1250       1875
      7839 KING               10       5000       7500
      7499 ALLEN              10       1600       2400
      7566 JONES              10       2975     4462.5
      7902 FORD               20       3000       3000
      7369 SMITH              20        800        800

      7788 SCOTT              20       3000       3000
      7566 JONES              20       2975       2975
      7876 ADAMS              20       1100       1100
      7521 WARD               30       1250       1250
      7499 ALLEN              30       1600       1600
      7900 JAMES              30        950        950
      7844 TURNER             30       1500       1500
      7654 MARTIN             30       1250       1250
      7698 BLAKE              30       2850       2850

25 rows selected.





If I am interested only in the records that were impacted by this rule, I can run the following SQL (note the RETURN UPDATED ROWS option).

  1   SELECT empno,ename , deptno, sal , new_sal
  2   FROM scott.emp
  3   MODEL return updated rows
  4   PARTITION BY (empno , ename , sal)
  5   DIMENSION BY (deptno)
  6   MEASURES (sal new_sal) IGNORE NAV
  7   RULES ( new_sal[10] = CV(sal) * 1.5)
  8*  ORDER BY deptno
SQL> /

     EMPNO ENAME          DEPTNO        SAL    NEW_SAL
---------- ---------- ---------- ---------- ----------
      7934 MILLER             10       1300       1950
      7499 ALLEN              10       1600       2400
      7566 JONES              10       2975     4462.5
      7521 WARD               10       1250       1875
      7782 CLARK              10       2450       3675
      7876 ADAMS              10       1100       1650
      7839 KING               10       5000       7500
      7369 SMITH              10        800       1200
      7698 BLAKE              10       2850       4275
      7900 JAMES              10        950       1425
      7788 SCOTT              10       3000       4500
      7844 TURNER             10       1500       2250
      7902 FORD               10       3000       4500
      7654 MARTIN             10       1250       1875


14 rows selected.

You can also perform iterations using the MODEL clause.

If I have an initial value of 500 and I reduce that value by 50% in each iteration, then at the 9th iteration the value goes below 1.



SQL> SELECT value_below_breakeven , num_iterations, breakeven_value  FROM dual
  2   MODEL
  3   DIMENSION BY (1 breakeven_value)
  4   MEASURES (500 value_below_breakeven, 0 num_iterations)
  5   RULES ITERATE (20) UNTIL (value_below_breakeven[1]<=1)
  6  (value_below_breakeven[1] = value_below_breakeven[1]/2,
  7   num_iterations[1] = num_iterations[1] + 1);

VALUE_BELOW_BREAKEVEN NUM_ITERATIONS BREAKEVEN_VALUE
--------------------- -------------- ---------------
             .9765625              9               1

 

Here are the iterations and their values.


Iteration      Value
        0        500
        1        250
        2        125
        3       62.5
        4      31.25
        5     15.625
        6     7.8125
        7    3.90625
        8   1.953125
        9   0.976563






Good luck with the MODEL clause.

Tuesday, February 23, 2010

Transposing the data

Before Oracle 11g and SQL Server 2005, transposing data was a bit cumbersome.

Transposing the data - turning rows into columns, or vice versa.

For example, if a person has multiple phone numbers and we want to show the phone numbers on one record ... we could use the PIVOT function.

To transpose columns into rows, we would use UNPIVOT.

SQL> Create table person
2 ( personid integer ,
3 telephonenumber varchar2(50) ) ;
Table created


SQL> insert into person values ( 100015 , 2484775248);
1 row inserted

SQL> insert into person values( 100015 , 9085014258);
1 row inserted

SQL> insert into person values( 100015 , 2015579964);
1 row inserted


SQL> insert into person values( 100010 , 7328225687);
1 row inserted


SQL> insert into person values( 100010 , 5328225687);


1 row inserted

Before 11g ...

SELECT personid ,
MAX(DECODE( rn ,1 , telephonenumber)) Phone_1 ,
MAX(DECODE( rn ,2 , telephonenumber)) Phone_2 ,
MAX(DECODE( rn ,3 , telephonenumber) ) Phone_3
FROM
(SELECT personid ,
telephonenumber ,
row_number() over ( partition BY personid order by personid) rn
FROM person
WHERE personid IN (100015 , 100010)
)
GROUP BY personid
 
 
In Oracle 11g / SQL Server 2005 .
 
SQL> SELECT *

2 FROM (SELECT personid , telephonenumber , row_number() over ( partition by personid order by personid) rn
3 FROM person
4 WHERE personid IN (100015 , 100010
5 ) )pivot(max(telephonenumber) for rn IN (1 ,2 , 3 ))
6 /




  PERSONID          1          2          3
---------- ---------- ---------- ----------
    100010 7328225687 5328225687
    100015 9085014258 2484775248 2015579964


In the above SQL, we knew the maximum number of columns to be pivoted beforehand.
If the number of columns (to be pivoted) is dynamic, you would use PIVOT XML.
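A sketch of the dynamic variant (assuming the same person table as above; with PIVOT XML, the pivoted data comes back as a single XMLTYPE column rather than discrete columns):

```sql
SELECT *
FROM (SELECT personid,
             telephonenumber,
             ROW_NUMBER() OVER (PARTITION BY personid
                                ORDER BY personid) rn
      FROM person)
PIVOT XML (MAX(telephonenumber) FOR rn IN (ANY));
```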

Monday, February 1, 2010

Real Time Monitoring / Tracing

I have to say, DBMS_APPLICATION_INFO is one of the most underutilized packages from Oracle. It is very valuable for tracing and for identifying long-running operations and offending sessions, if you will.

This can be valuable

a) in multi-tier applications, where you need to trace a particular module using DBMS_MONITOR
b) for long-running operations

SQL> show user



USER is "DBADMIN"


SQL> exec DBMS_APPLICATION_INFO.set_module(module_name => 'Demoing App Info' ,action_name => 'Demonstration');
 
SQL> col username format a14
SQL> col module format a25
SQL> col action format a35
 
SQL> Select username , module , action from GV$SESSION where username ='DBADMIN';

USERNAME       MODULE                    ACTION
-------------- ------------------------- -----------------------------------
DBADMIN        Demoing App Info          Demonstration

As you can see, it is immediately visible in V$SESSION. You need NOT commit to see the change.

Let us say you have a long-running operation (in a loop); you can set the action to be indicative of the counter. As you can see, the value of module is carried over when the following syntax is used.

SQL> exec DBMS_APPLICATION_INFO.set_action(action_name => 'Processing 1 of 100');


PL/SQL procedure successfully completed.

SQL> Select username , module , action from GV$SESSION where username ='DBADMIN';

USERNAME       MODULE                    ACTION
-------------- ------------------------- ----------------------------------
DBADMIN        Demoing App Info          Processing 1 of 100



This property can also be set via Java:

String[] metrics = new String[OracleConnection.END_TO_END_STATE_INDEX_MAX];
metrics[OracleConnection.END_TO_END_CLIENTID_INDEX] = "Zahir M";
metrics[OracleConnection.END_TO_END_MODULE_INDEX] = "Hello.java";
metrics[OracleConnection.END_TO_END_ACTION_INDEX] = "testing Hello";
((OracleConnection) connection).setEndToEndMetrics(metrics, (short) 0);

Sunday, January 10, 2010

Multiple Table Insert

Multi-table insert is one of the great features that benefit developers in ETL-like processing. Basically, you select the data from the data store and insert it into multiple tables.

 Let us look at an example.

SQL> select count(*) from all_objects;

COUNT(*)
----------
69495

SQL> Create table t1 as Select object_id , object_name from all_objects Where 1 = 2;
Table created.

SQL> Create table t2 as Select object_id , object_name from all_objects Where 1 = 2;
Table created.

SQL> Insert

2 All
3 When (Object_Type) <> 'TABLE' Then
4 Into T1
5 (Object_Id , Object_Name )
6 Values
7 (500 , Object_Name )
8 When (Object_Type) = 'TABLE' Then
9 Into T2
10 (Object_Id , Object_Name )
11 Values
12 (600 , Object_Name )
13 Select Object_Name , Object_Type
14 From All_Objects;

69497 rows created.

SQL> select count(*) from t1;

COUNT(*)
----------
66604

SQL> select count(*) from t2;

COUNT(*)
----------
2893

 



As you can see, we loaded the data from ALL_OBJECTS into two different tables, T1 and T2.

In my tenure with Barnes & Noble, I used this feature (and the error-logging feature, which I will cover in the next few posts) heavily, thereby reducing the data-load time from hours to minutes.