Wednesday, November 4, 2009

Analytics 2.0 - Part II

Yet another neat addition to the family of analytic functions in 11g Release 2.
With the new NTH_VALUE function, I can look up the value of the nth row within a window.

The following query gives the 4th salary from dept 30 (look at the column NV).

SELECT empno,
       ename,
       deptno,
       sal,
       first_value(sal) over(PARTITION BY deptno ORDER BY empno) fv,
       nth_value(sal, 4) over(PARTITION BY deptno ORDER BY empno) nv,
       last_value(sal) over(PARTITION BY deptno ORDER BY empno
            ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) lv
FROM   emp
WHERE  deptno = 30


SQL> /


EMPNO      ENAME      DEPTNO     SAL        FV        NV          LV
---------- ---------- ---------- ---------- ---------- ---------- ----------
7499       ALLEN       30        1600        1600                  950
7521       WARD        30        1250        1600                  950
7654       MARTIN      30        1250        1600                  950
7698       BLAKE       30        2850        1600        2850      950
7844       TURNER      30        1500        1600        2850      950
7900       JAMES       30         950        1600        2850      950


6 rows selected.
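Note that with the default window (from the start of the partition up to the current row), NV stays NULL until the fourth row is reached. If you want the fourth salary reported on every row, you can widen the window explicitly; a minimal sketch against the same EMP data:

-- report the 4th salary (by empno order) on every row of the partition
SELECT empno,
       ename,
       deptno,
       sal,
       nth_value(sal, 4) over(PARTITION BY deptno ORDER BY empno
            ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) nv_all
FROM   emp
WHERE  deptno = 30;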

Good luck ...

Size DOES Matter

We should size column definitions correctly. In various implementations I have seen,
choosing the right size and the right data type has taken a back seat.

The wrong size impacts data integrity, memory usage, network traffic and
runtime performance.
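As a quick illustration of the data integrity point, a correctly sized column lets the database itself reject bad data (the CUSTOMERS table and POSTAL_CODE column here are purely illustrative):

create table customers ( postal_code varchar2(10) );

-- rejected with ORA-12899: value too large for column
insert into customers values ( rpad('X', 200, 'X') );

-- had postal_code been declared varchar2(4000), the same garbage
-- would have been accepted silently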

Let us take a look at runtime performance.

I am creating two tables from the same source. The only difference is the declared size of the
second column: OBJECT_NAME is VARCHAR2(30) in one table and VARCHAR2(4000) in the other.
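The original DDL is not shown here, but the two tables could be created along these lines (matching the DESCRIBE output below):

create table t_normal ( object_id number, object_name varchar2(30)   );
create table t_4000   ( object_id number, object_name varchar2(4000) );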


SQL> desc t_normal
Name        Type         Nullable Default Comments
----------- ------------ -------- ------- --------
OBJECT_ID   NUMBER                                
OBJECT_NAME VARCHAR2(30)                          

SQL> desc t_4000
Name        Type           Nullable Default Comments
----------- -------------- -------- ------- --------
OBJECT_ID   NUMBER                                  
OBJECT_NAME VARCHAR2(4000)                          

SQL> truncate table t_normal;

Table truncated

SQL> truncate table t_4000;

Table truncated

SQL> insert /*+ APPEND */ into t_normal select object_id , object_name from all_objects;

629424 rows inserted

SQL> insert /*+ APPEND */ into t_4000 select object_id , object_name from all_objects;

629424 rows inserted

SQL> commit;

Commit complete

Let us set the arraysize to 100, gather statistics, and time the full scans.


SQL> set arraysize 100
SQL> set timing on

SQL> exec dbms_stats.gather_table_stats(user , 'T_NORMAL');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09

SQL> exec dbms_stats.gather_table_stats(user , 'T_4000');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.10

SQL> select object_id , object_name from t_normal;
......
......
.....


629424 rows selected.

Elapsed: 00:02:00.78

SQL> select object_id , object_name from t_4000;
.....
.....
....


629424 rows selected.

Elapsed: 00:15:03.40


Look at the run times: there is a huge difference between the first run and the second, even though both tables hold exactly the same data. VARCHAR2 values are stored variable-length either way; the penalty comes mainly from the client side, where tools such as SQL*Plus size their fetch buffers from the declared maximum width (arraysize x maximum column width), so the VARCHAR2(4000) definition forces far larger memory allocations on every array fetch.
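If an over-sized column is spotted after the fact, it can usually be narrowed in place, provided no existing value exceeds the new length (a sketch; Oracle raises ORA-01441 if some value is too long):

-- shrink the declared width; fails with ORA-01441 if any stored value exceeds 30
alter table t_4000 modify ( object_name varchar2(30) );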

Tuesday, November 3, 2009

Honey, I shrunk the table!

Let us say you have a table with frequent deletes and inserts, and your application reads this table (via full table scan) to process business results. You are likely to have a performance issue, as Oracle has to read all the blocks below the high water mark (HWM), including the empty ones. In 10g and above, you can reset the HWM as shown below.

SQL> drop table t ;
Table dropped
SQL> create table t as select * from all_objects;
Table created
SQL> select sum(bytes) from user_segments where segment_name = 'T';
SUM(BYTES)
----------
99614720


SQL> select count(*) from t;
COUNT(*)
----------
629079


SQL> delete from t where owner ='SYS';
23157 rows deleted

SQL> select count(*) from t;
COUNT(*)
----------
605922


SQL> select sum(bytes) from user_segments where segment_name = 'T';
SUM(BYTES)
----------
99614720



Even after deleting these records, the space is not released.
But when you shrink the table as below, you can reclaim it. Row movement must be enabled first, because shrinking physically relocates rows. SHRINK SPACE COMPACT only compacts the rows and leaves the HWM where it is (so it can run during normal activity); the subsequent SHRINK SPACE then moves the HWM down and releases the unused space.

 


SQL> alter table t ENABLE row movement ;
Table altered
SQL> alter table t shrink space compact;
Table altered
SQL> alter table t shrink space;
Table altered

SQL> select sum(bytes) from user_segments where segment_name = 'T';
SUM(BYTES)
----------
88932352
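To see the effect on the high water mark itself, you can refresh the statistics and check the block counts (a quick sketch):

exec dbms_stats.gather_table_stats(user, 'T');

-- BLOCKS reports the blocks below the HWM once statistics are fresh
select blocks, empty_blocks from user_tables where table_name = 'T';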

If you are not sure which objects are candidates for shrinking, you can refer to the findings from the Segment Advisor. You can run the Segment Advisor either from Enterprise Manager or through PL/SQL (see the sketch at the end of this post). You can get the relevant SQL commands as shown below; the C1, C2 and C3 columns returned by DBMS_SPACE.ASA_RECOMMENDATIONS carry the suggested commands.


select *
from (
      select c3 from table(dbms_space.asa_recommendations())
      union all
      select c2 from table(dbms_space.asa_recommendations())
      union all
      select c1 from table(dbms_space.asa_recommendations())
     )
where c3 is not null;

DBMS_SPACE.ASA_RECOMMENDATIONS() also provides the estimated space savings. I asked Tom Kyte on his forum about the accuracy of this value; please see his response at
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2049277600346543592#2049993400346447818
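
If you prefer to run the Segment Advisor yourself through PL/SQL rather than relying on the automatic task, a minimal sketch along these lines should work (the task name SEG_ADV_T and the target table T are illustrative); the findings then appear in USER_ADVISOR_FINDINGS:

DECLARE
  l_object_id NUMBER;
BEGIN
  -- create a Segment Advisor task
  dbms_advisor.create_task(
    advisor_name => 'Segment Advisor',
    task_name    => 'SEG_ADV_T');

  -- register the table to analyse
  dbms_advisor.create_object(
    task_name    => 'SEG_ADV_T',
    object_type  => 'TABLE',
    attr1        => user,     -- schema (illustrative)
    attr2        => 'T',      -- table to analyse (illustrative)
    attr3        => NULL,
    attr4        => NULL,     -- unused
    object_id    => l_object_id);

  dbms_advisor.set_task_parameter(
    task_name => 'SEG_ADV_T',
    parameter => 'RECOMMEND_ALL',
    value     => 'TRUE');

  dbms_advisor.execute_task(task_name => 'SEG_ADV_T');
END;
/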

Please send in your comments.