
Thursday, July 17, 2014

Compatibility Issues with MS Access and SQL Server


In most organizations, power users use MS Access to query data from other RDBMSs (via ODBC) for their reporting needs.
The learning curve for this approach is minimal; the downside is that there are a few compatibility issues (with data types), performance problems, and so on.

In my view, it is better that they use reporting tools that take advantage of the RDBMS in a native way.

Let us look at one of these compatibility issues. Not all datatypes translate nicely into MS Access.
The datatypes 'datetime2' and 'bigint' are two of them.

Here is an example.

I created a table called 'DimDate' with one of its columns defined as 'bigint'.
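
For reference, a minimal sketch of such a table (column names assumed for illustration; the actual structure appears in the screenshots below):

-- Hypothetical DDL; only the bigint column matters for this example.
CREATE TABLE dbo.DimDate (
    DateKey   bigint  NOT NULL,   -- bigint is one of the problem types for Access
    FullDate  date    NOT NULL
);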

a) Structure of the table in SQL Server via SSMS (SQL Server Management Studio)



b) Data viewed in SQL Server via SSMS.





I linked this table in MS Access. (Access versions of that era had no native 64-bit integer type, so a linked bigint column typically comes through as Text.)

c) Structure of the table in MS Access



d) Data viewed in MS Access





I changed the data type to int and re-linked the table in MS Access.
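
The change was along these lines (column name assumed as above):

-- Narrowing the column from bigint to int; all stored values must fit in 32 bits.
ALTER TABLE dbo.DimDate ALTER COLUMN DateKey int NOT NULL;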


e) Structure of the table in SQL Server via SSMS



f) Structure of the table in MS Access



g) Data viewed in MS Access



Apart from the datatype issues, there are performance issues. In particular, when you link a view in MS Access and filter it, Access can pull all the rows of the underlying base table down to the client and apply the filter criteria locally, which is not the best way to run queries.
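
One way around both problems, and an example of the 'native' approach mentioned earlier, is an Access pass-through query: its SQL text is sent verbatim to SQL Server over ODBC, so filtering happens on the server and only matching rows travel to the client. A sketch, using the assumed table from above:

-- Placed in an Access pass-through query, this runs entirely on SQL Server.
SELECT DateKey, FullDate
FROM dbo.DimDate
WHERE FullDate >= '2014-01-01';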

Comments welcome.

Wednesday, November 4, 2009

Size DOES Matter

We should size column definitions correctly. In various implementations I have seen, choosing the right size and the right data type has taken a back seat.

The wrong size impacts data integrity, memory usage, network traffic and runtime performance.

Let us take a look at runtime performance.

I am creating two tables from the same source. The only difference is the size of the second column.
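
The DDL would have been along these lines (an assumption; only the DESC output was captured):

SQL> create table t_normal (object_id number, object_name varchar2(30));
SQL> create table t_4000   (object_id number, object_name varchar2(4000));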


SQL> desc t_normal
Name        Type         Nullable Default Comments
----------- ------------ -------- ------- --------
OBJECT_ID   NUMBER                                
OBJECT_NAME VARCHAR2(30)                          

SQL> desc t_4000
Name        Type           Nullable Default Comments
----------- -------------- -------- ------- --------
OBJECT_ID   NUMBER                                  
OBJECT_NAME VARCHAR2(4000)                          

SQL> truncate table t_normal;

Table truncated

SQL> truncate table t_4000;

Table truncated

SQL> insert /*+ APPEND */ into t_normal select object_id , object_name from all_objects;

629424 rows inserted

SQL> insert /*+ APPEND */ into t_4000 select object_id , object_name from all_objects;

629424 rows inserted

SQL> commit;

Commit complete

Let us set the arraysize to 100 and time it. (In SQL*Plus, arraysize controls how many rows are fetched per round trip to the server.)


SQL> set arraysize 100
SQL> set timing on

SQL> exec dbms_stats.gather_table_stats(user , 'T_NORMAL');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09

SQL> exec dbms_stats.gather_table_stats(user , 'T_4000');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.10

SQL> select object_id , object_name from t_normal;
......
......
.....


629424 rows selected.

Elapsed: 00:02:00.78

SQL> select object_id , object_name from t_4000;
.....
.....
....


629424 rows selected.

Elapsed: 00:15:03.40


Look at the run times: there is a huge difference between the first run and the second run, even though both tables hold exactly the same data. The client allocates its fetch buffers based on the defined column width, not the actual data, so with an arraysize of 100 each fetch from T_4000 must budget roughly 100 x 4,000 bytes for OBJECT_NAME, against about 100 x 30 bytes for T_NORMAL. That repeated over-allocation shows up directly in the elapsed time.
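
If the data genuinely fits in 30 characters, right-sizing the column should bring the runtime back in line (a sketch; it will fail if any stored value exceeds the new limit):

SQL> alter table t_4000 modify (object_name varchar2(30));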