Archive for the ‘Hasso Plattner’ Category

Isolation Levels by example

December 28th, 2013

For an earlier post on isolation levels, please refer here.

Understanding Isolation Level "1" : Avoids Dirty Reads (Default isolation level for ASE)

Transaction T1 (Session 1) modifies a data item. Another transaction T2 (Session 2) then reads that data item before T1 performs a COMMIT or ROLLBACK. If T1 then performs a ROLLBACK, T2 has read a data item that was never committed and so never really existed. This is a dirty read.
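The examples below use a simple two-column table named pmtmaster, whose definition the post never shows. A minimal sketch of an assumed schema that matches the transcripts (two integer columns):

1> create table pmtmaster (id1 int, id2 int)
2> go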

Session 1:

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               1
(1 row affected)
1> insert into pmtmaster values(1,100)
2> go
1> begin tran
2> update pmtmaster set id2=200 where id1=1
3> go
1>

Remarks: In Session 1, we update the row with id1=1 inside an open transaction (no COMMIT or ROLLBACK yet).
 
Session 2:

1> print "Session 2"
2> go
Session 2
1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               1
(1 row affected)
1> select * from pmtmaster where id1=1
2> go
^C^C
[CanCan]

Remarks: At isolation level 1, the default, Session 2 cannot read the modified row because the other transaction has not yet committed; the SELECT blocks until we cancel it with Ctrl-C. So isolation level 1 avoids dirty reads.
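As an aside, when only a single query needs to read uncommitted data, ASE also supports a per-statement at isolation clause instead of changing the session level; a minimal sketch (the next section instead switches the whole session to level 0):

1> select * from pmtmaster where id1=1 at isolation 0
2> go

This query would return the uncommitted row without blocking, while other statements in the session keep the default behavior.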

Understanding Isolation Level "0" : Allows Dirty Reads

Session 1:

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               1
(1 row affected)
1> insert into pmtmaster values(1,100)
2> go
1> begin tran
2> update pmtmaster set id2=200 where id1=1
3> go
1>

Remarks: Same as above. In Session 1, we update the row with id1=1 inside an open transaction.
 
Session 2:

1> print "Session 2"
2> go
Session 2
1> set transaction isolation level 0
2> go
1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               0
(1 row affected)
1> begin tran
2> select * from pmtmaster where id1=1
3> go
 id1         id2
 ----------- -----------
           1         200
(1 row affected)

Remarks: With isolation level 0, the SELECT returns immediately even though Session 1's change is uncommitted: a dirty read. If Session 1 now issues a ROLLBACK, Session 2 has read a value (id2 = 200) that never existed in committed data, leaving it inconsistent.
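To make the inconsistency concrete, here is a sketch of the continuation; the committed value 100 comes from the insert at the start of this example:

Session 1:

1> rollback
2> go

Session 2:

1> select * from pmtmaster where id1=1
2> go
 id1         id2
 ----------- -----------
           1         100
(1 row affected)

The value 200 that Session 2 read earlier was rolled back and never became committed data.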

Understanding Isolation Level "2" : Avoids Non-Repeatable Reads

What is a non-repeatable read?

Transaction T1 (Session 1) reads a data item. Another transaction T2 (Session 2) then modifies or deletes that data item and commits. If T1 then attempts to reread the data item, it receives a modified value or discovers that the data item has been deleted.

Session 1 (T1):

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               1
(1 row affected)
1> begin tran
2> select * from pmtmaster where id2=200
3> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: Transaction T1 (Session 1) reads a set of rows; assume the table now contains the rows (1,200) and (2,200). T1's transaction stays open.

Session 2 (T2):

1> begin tran
2> update pmtmaster set id2=300 where id1=2
3> go
1> commit
2> go

Remarks: Another transaction T2 (Session 2) modifies one of those rows and commits.

Session 1 continues:

1> select * from pmtmaster where id2=200
2> go
 id1         id2
 ----------- -----------
           1         200
(1 row affected)

Remarks: When T1 rereads the same data within the same transaction, it receives a different result set. This is the non-repeatable read problem. Let's review how to avoid it.

How to avoid non-repeatable reads?

 

Session 1 (T1):

1> set transaction isolation level 2
2> go
1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               2
(1 row affected)
1> begin tran
2> select * from pmtmaster where id2=200
3> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: To avoid the non-repeatable read problem, set isolation level 2 in Session 1, then read the rows inside an open transaction.

Session 2 (T2):

1> begin tran
2> update pmtmaster set id2=300 where id1=2
3> go
^C^C
[CanCan]

Remarks: Now transaction T2 (Session 2) is not allowed to modify the rows that transaction T1 (Session 1) read earlier; the UPDATE blocks until we cancel it.
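From a third session you can confirm that Session 2 is blocked rather than failed, using ASE's standard procedures; a sketch (output omitted):

1> sp_who
2> go
1> sp_lock
2> go

sp_who reports the blocked process with a status of "lock sleep", and sp_lock lists the shared locks that Session 1's transaction still holds on pmtmaster.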
Session 1 continues:

1> select * from pmtmaster where id2=200
2> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: T1 still gets the same result set on reread.

But isolation level 2 still has the phantom read problem.

Transaction T1 reads a set of data items satisfying some search condition. Transaction T2 then creates data items that satisfy T1's search condition and commits. If T1 then repeats its read with the same search condition, it gets a set of data items different from the first read.

Session 2 (T2):

1> begin tran
2> insert into pmtmaster values (3,200)
3> go
1> commit
2> go

Remarks: T2 cannot modify the rows T1 has already read, but it can still insert a new row that satisfies T1's search condition.

Session 1 continues:

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               2
(1 row affected)
1> select * from pmtmaster where id2=200
2> go
 id1         id2
 ----------- -----------
           1         200
           2         200
           3         200
(3 rows affected)

Remarks: The same transaction in Session 1 now returns a different number of rows. These extra rows are called phantom reads.
   
To avoid phantom reads, enable isolation level 3 as below.

Understanding Isolation Level "3" : Avoids Phantom Reads

Session 1 (T1):

1> set transaction isolation level 3
2> go
1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               3
(1 row affected)
1> begin tran
2> select * from pmtmaster where id2=200
3> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: To avoid phantom reads, set isolation level 3 in Session 1 and read the rows inside an open transaction (assume the table again contains only the rows (1,200) and (2,200)).

Session 2 (T2):

1> begin tran
2> update pmtmaster set id2=300 where id1=2
3> go
^C^C
[CanCan]

Remarks: Now you cannot modify the rows in T1's result set.

1> begin tran
2> insert into pmtmaster values (3,200)
3> go
^C^C
[CanCan]

Remarks: You cannot insert new rows that would satisfy T1's search condition either; the INSERT also blocks.

Session 1 continues:

1> select * from pmtmaster where id2=200
2> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: For the life of the transaction, T1 always gets the same result set.
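Session-wide level 3 reduces concurrency, so ASE's T-SQL also offers the holdlock keyword to give a single SELECT level-3 behavior while the session stays at its default level; a minimal sketch:

1> begin tran
2> select * from pmtmaster holdlock where id2=200
3> go

holdlock keeps the shared locks on the rows read until the transaction ends, so neither updates nor qualifying inserts can change this result set in the meantime.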

Need for In-Memory Technology: SAP HANA

May 6th, 2013

Challenge 1: Massive Data Growth

Massive amounts of data are being created every year, and as per the IDC/EMC report, the digital universe will grow to 40,000 exabytes by 2020:

http://germany.emc.com/collateral/about/news/idc-emc-digital-universe-2011-infographic.pdf

http://www.emc.com/collateral/analyst-reports/idc-the-digital-universe-in-2020.pdf


Challenge 2: Fast access to information for business decision-making.

Businesses and people want fast, exact, and correct answers to all their questions from this massive amount of data.

Challenge 3: Current technologies cannot deliver under this massive data growth.

Historical DBMS:

Historically, database systems were designed to perform well on computers with limited RAM, which meant slow disk I/O was the main bottleneck in data throughput. Consequently, the architecture of these systems focused on optimizing disk access, e.g., by minimizing the number of disk blocks (or pages) that must be read into main memory when processing a query.

New Hardware Architecture (128 or more CPU cores and up to 2 TB of RAM)

Computer architecture has changed in recent years. Multi-core CPUs (multiple CPU cores on one chip or in one package) are now standard, with fast communication between processor cores enabling parallel processing. Main memory is no longer a limited resource; modern servers can have 1 TB of system memory, which allows complete databases to be held in RAM. Current server processors have up to 80 cores, and 128 cores will soon be available. With the increasing number of cores, CPUs can process more data per time interval. This shifts the performance bottleneck from disk I/O to the data transfer between CPU cache and main memory.


Need for In-Memory Technology, SAP HANA:

From the discussion above, it is clear that traditional databases may not use current hardware efficiently and may not be able to fulfill current and future business needs.

The SAP HANA database is a relational database that has been optimized to leverage state-of-the-art hardware. It provides all the SQL features of a standard relational database along with a rich set of analytical capabilities.

Using groundbreaking in-memory hardware and software, HANA can manage data at massive scale, analyze it at amazing speed, and give the business not only instant access to real-time transactional information and analysis but also more flexibility: flexibility to analyze new types of data in different ways without creating custom data warehouses and data marts, and even the flexibility to build new applications that were not possible before.

HANA Database Features

Important database features of HANA include OLTP and OLAP capabilities, extreme performance, in-memory storage, massively parallel processing, a hybrid database with both column and row stores, complex event processing, a calculation engine, compression, virtual views, partitioning, and no materialized aggregates. The HANA in-memory architecture includes the In-Memory Computing Engine and the In-Memory Computing Studio for modeling and administration. Each of these properties deserves a detailed explanation, followed by the SAP HANA architecture.
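To make the hybrid column/row store concrete, here is a minimal sketch in HANA SQL; the table and column names are hypothetical:

-- Column table: optimized for scans and aggregations over few columns.
CREATE COLUMN TABLE sales_items (item_id INTEGER, region VARCHAR(20), amount DECIMAL(15,2));

-- Row table: optimized for frequent single-row reads and writes.
CREATE ROW TABLE session_state (session_id INTEGER, last_action VARCHAR(100));

-- With the column store, an aggregation like this reads only the region
-- and amount columns and needs no precomputed aggregate tables.
SELECT region, SUM(amount) AS total FROM sales_items GROUP BY region;

This is why HANA can avoid materialized aggregates: aggregations are computed on the fly from the compressed column store.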

Source: www.sap.com and the EMC/IDC reports.

 

SAP D&T Academy Video: How to Rename Sybase ASE server

December 16th, 2012

Follow the video below for "How to Rename Sybase ASE server".

 

SAP’s Hasso Plattner : Father of SAP’s In Memory Technology HANA

May 30th, 2012

Sir Hasso Plattner is a co-founder of the software giant SAP AG. Today he is Chairman of the Supervisory Board of SAP AG.

Plattner founded the Hasso Plattner Institute for software systems engineering, based at the University of Potsdam and in Palo Alto, California; its only source of funding is the non-profit Hasso Plattner Foundation for Software Systems Engineering.

You can see his book: In-Memory Data Management: An Inflection Point for Enterprise Applications.

You can read more about him and his book @ http://no-disk.com/

Sir Hasso’s Interview :
[youtube http://www.youtube.com/watch?v=W6S5hrPNr1E]

Finally, SAP's EVP of Database & Technology Platform, Steve, shares his views on the SAP database technology roadmap.

See the future of Sybase ASE with the SAP Real-Time Data Platform:
[youtube http://www.youtube.com/watch?v=OReE_qu8zmI]

Happy Learning SAP Sybase!

Source: google.com, wikipedia.org, sap.com, youtube.com, no-disk.com