
Archive for the 'Start Sybase' Category

Happy New Year !!!

January 2nd, 2014

Happy New Year 2014!!

In 2014, we'll talk a lot about Sybase ASE version 15.7.x.

We will also talk about data replication with SAP Sybase Replication Server, SAP Sybase IQ, and SAP HANA.

Stay tuned in the new year!

Team,
sybaseblog.com

Isolation Levels by example

December 28th, 2013

For an earlier post on isolation levels, please refer here.

Understanding Isolation Level "1": Avoids Dirty Reads (Default Isolation Level for ASE)

Transaction T1 (Session 1) modifies a data item. Another transaction T2 (Session 2) then reads that data item before T1 performs a COMMIT or ROLLBACK. If T1 then performs a ROLLBACK, T2 has read a data item that was never committed and so never really existed.
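The sessions below operate on a small table named pmtmaster. A minimal setup sketch (the column names id1 and id2 are taken from the query output shown later; the int datatypes are an assumption):

1> create table pmtmaster (id1 int, id2 int)
2> go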

Session 1:

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               1
(1 row affected)
1> insert into pmtmaster values(1,100)
2> go
1> begin tran
2> update pmtmaster set id2=200 where id1=1
3> go
1>

Remarks: In Session 1, an open transaction updates the row with id1=1 (setting id2=200) and leaves it uncommitted.

Session 2:

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               1
(1 row affected)
1> select * from pmtmaster where id1=1
2> go
^C^C
[CanCan]

Remarks: At isolation level 1, the default mode, we cannot read dirty data, as it has not yet been committed by the other transaction; the select blocks until it is cancelled. So isolation level 1 avoids dirty reads.

Understanding Isolation Level "0": Allows Dirty Reads

Session 1:

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               1
(1 row affected)
1> insert into pmtmaster values(1,100)
2> go
1> begin tran
2> update pmtmaster set id2=200 where id1=1
3> go
1>

Remarks: Same as above; in Session 1, an open transaction updates the row with id1=1 and leaves it uncommitted.

Session 2:

1> set transaction isolation level 0
2> go
1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               0
(1 row affected)
1> begin tran
2> select * from pmtmaster where id1=1
3> go
 id1         id2
 ----------- -----------
           1         200
(1 row affected)

Remarks: With isolation level 0, the select reads the uncommitted value (id2=200): a dirty read.

Session 1 continues:

1> rollback
2> go

Remarks: If Session 1 rolls back, Session 2 has read data that never really existed and is left with an inconsistent view.

Understanding Isolation Level "2": Avoids Non-Repeatable Reads

What is a non-repeatable read?

Transaction T1 (Session 1) reads a data item. Another transaction T2 (Session 2) then modifies or deletes that data item and commits. If T1 then attempts to reread the data item, it receives a modified value or discovers that the data item has been deleted.

Session 1:

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               1
(1 row affected)
1> begin tran
2> select * from pmtmaster where id2=200
3> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: Transaction T1 (Session 1) reads a set of data items.

Session 2:

1> begin tran
2> update pmtmaster set id2=300 where id1=2
3> go
1> commit
2> go

Remarks: Another transaction T2 (Session 2) then modifies one of those rows and commits.

Session 1 continues:

1> select * from pmtmaster where id2=200
2> go
 id1         id2
 ----------- -----------
           1         200
(1 row affected)

Remarks: When T1 rereads the data, it receives a modified value/a different result set within the same transaction. This is the non-repeatable read problem. Let's review how to avoid it.

How to avoid non-repeatable reads?

Session 1:

1> set transaction isolation level 2
2> go
1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               2
(1 row affected)
1> begin tran
2> select * from pmtmaster where id2=200
3> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: To avoid the non-repeatable read problem, enable isolation level 2 before reading.

Session 2:

1> begin tran
2> update pmtmaster set id2=300 where id1=2
3> go
^C^C
[CanCan]

Remarks: Now transaction T2 is not allowed to modify the result set that transaction T1 (Session 1) read earlier; its update blocks.

Session 1 continues:

1> select * from pmtmaster where id2=200
2> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: T1 still gets the same rows on reread.

But isolation level 2 still has the phantom read problem.

Transaction T1 reads a set of data items satisfying some search condition. Transaction T2 then creates data items that satisfy T1's search condition and commits. If T1 then repeats its read with the same search condition, it gets a set of data items different from the first read.

Session 2:

1> begin tran
2> insert into pmtmaster values (3,200)
3> go
1> commit
2> go

Remarks: At isolation level 2, Session 2 cannot modify the rows T1 has read, but it can still insert new rows that satisfy T1's search condition and so affect the result set.

Session 1 continues:

1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               2
(1 row affected)
1> select * from pmtmaster where id2=200
2> go
 id1         id2
 ----------- -----------
           1         200
           2         200
           3         200
(3 rows affected)

Remarks: The same Session 1 transaction now returns a different number of rows. This is called a phantom read.
   
To avoid phantom reads, enable isolation level 3 as below.

Understanding Isolation Level "3": Avoids Phantom Reads

Session 1:

1> set transaction isolation level 3
2> go
1> select @@isolation "Isolation Level"
2> go
 Isolation Level
 ---------------
               3
(1 row affected)
1> begin tran
2> select * from pmtmaster where id2=200
3> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: To avoid phantom reads, enable isolation level 3 before reading.

Session 2:

1> begin tran
2> update pmtmaster set id2=300 where id1=2
3> go
^C^C
[CanCan]

Remarks: Now Session 2 cannot modify the result set.

Session 2 continues:

1> begin tran
2> insert into pmtmaster values (3,200)
3> go
^C^C
[CanCan]

Remarks: Session 2 also cannot create new data items that would affect the result set.

Session 1 continues:

1> select * from pmtmaster where id2=200
2> go
 id1         id2
 ----------- -----------
           1         200
           2         200
(2 rows affected)

Remarks: T1 gets the same result set every time.
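Besides the session-level set transaction isolation level command used above, ASE also supports a per-query override through the at isolation clause of select. A minimal sketch (verify the exact syntax against your ASE version's reference manual):

1> select * from pmtmaster at isolation read uncommitted
2> go
1> select * from pmtmaster where id2=200 at isolation serializable
2> go

The first select behaves like isolation level 0 for that statement only; the second behaves like level 3, in both cases without changing the session's isolation level.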

SAP Recognized as a Market Leader by Gartner, Inc. in Operational Database Management Systems Magic Quadrant

November 17th, 2013

As per Gartner:

SAP

Located in Walldorf, Germany, SAP (www.sap.com) has several DBMS products that are used for transaction systems: SAP Sybase Adaptive Server Enterprise (ASE), SAP Sybase iAnywhere and SAP Hana. Both ASE and iAnywhere are available as software only, while SAP Hana is marketed as an appliance.

Strengths
  • Vision leadership — Moving into DBMS technology, SAP has introduced SAP Hana as an in-memory platform for hybrid transaction/analytical processing (HTAP) and acquired Sybase to add to the DBMS product line.

  • Strong DBMS offerings — In addition to SAP Hana, SAP Sybase ASE continues to support global-scale applications and was first to introduce an in-memory DBMS (IMDBMS) version.

  • Performance — References cited performance (scalability and reliability) as a major strength (one of the highest scores), mostly for SAP Sybase ASE.

Source: http://global.sap.com/corporate-en/news.epx?category=ALL&articleID=21912&searchmode=C&page=1&pageSize=10
& http://www.gartner.com/technology/reprints.do?id=1-1MNA5V2&ct=131105&st=sb
SAP® Sybase® Adaptive Server® Enterprise Gains Momentum With Rapid Customer Adoption

November 17th, 2013

In less than 18 months since the offering’s release in April 2012, more than 1,000 customers have chosen to run SAP Business Suite on SAP Sybase ASE and there are more than 2,000 customer installations. Both new and existing SAP customers can run a high-performance relational database management system (RDBMS) optimized for SAP Business Suite that helps improve operational efficiency and significantly reduce overall costs. The announcement was made at the SAP Database and Technology Partner Summit in Barcelona.

Source: http://www.prnewswire.com/news-releases/sap-sybase-adaptive-server-enterprise-gains-momentum-with-rapid-customer-adoption-229819941.html

http://www.hispanicbusiness.com/2013/11/5/sap_sybase_adaptive_server_enterprise_gains.htm

The Need for In-Memory Technology: SAP HANA

May 6th, 2013

Challenge 1: Massive Data Growth

Massive amounts of data are being created every year; as per the IDC/EMC report, data growth will reach 40,000 exabytes by 2020:

http://germany.emc.com/collateral/about/news/idc-emc-digital-universe-2011-infographic.pdf

http://www.emc.com/collateral/analyst-reports/idc-the-digital-universe-in-2020.pdf


Challenge 2: Fast access to information for business decision-making.

Businesses and people want fast, exact, and correct answers to all their questions from this massive amount of data.

Challenge 3: Current technologies cannot deliver under this massive data growth.

Historical DBMS:

Historically, database systems were designed to perform well on computer systems with limited RAM; as a result, slow disk I/O was the main bottleneck in data throughput. Consequently, the architecture of these systems was designed with a focus on optimizing disk access, e.g. by minimizing the number of disk blocks (or pages) to be read into main memory when processing a query.

New Hardware Architecture (up to 128 or more CPU cores and 2 TB of RAM)

Computer architecture has changed in recent years. Multi-core CPUs (multiple CPUs on one chip or in one package) are now standard, with fast communication between processor cores enabling parallel processing. Main memory is no longer a limited resource: modern servers can have 1 TB of system memory, which allows complete databases to be held in RAM. Current server processors have up to 80 cores, and 128 cores will soon be available. With the increasing number of cores, CPUs are able to process more data per time interval. This shifts the performance bottleneck from disk I/O to the data transfer between CPU cache and main memory.


The need for in-memory technology, SAP HANA:

From the discussion above, it is clear that traditional databases might not use current hardware efficiently and may not be able to fulfill current and future business needs.

The SAP HANA database is a relational database that has been optimized to leverage state-of-the-art hardware. It provides all of the SQL features of a standard relational database along with a feature-rich set of analytical capabilities.

Using groundbreaking in-memory hardware and software, HANA can manage data at massive scale, analyze it at amazing speed, and give the business instant access to real-time transactional information and analysis as well as more flexibility: flexibility to analyze new types of data in different ways, without creating custom data warehouses and data marts, and even the flexibility to build new applications that were not possible before.

HANA Database Features

Important database features of HANA include OLTP and OLAP capabilities, extreme performance, in-memory processing, massively parallel processing, a hybrid database with both column and row stores, complex event processing, a calculation engine, compression, virtual views, partitioning, and no precomputed aggregates. The HANA in-memory architecture includes the In-Memory Computing Engine and the In-Memory Computing Studio for modeling and administration. Each of these properties deserves a detailed explanation, followed by the SAP HANA architecture.

Source: www.sap.com and EMC/IDC reports.

Migrating SAP Sybase ASE from AIX to Linux

May 1st, 2013

Always migrate the development environment first, then UAT. Before moving to production, perform regression testing in the UAT environment.

Consider creating scripts to run update statistics and xp_postload (or drop and re-create indexes) for each and every database.

Steps for an ASE database (you can repeat the same steps for other databases); a consolidated isql sketch of the key commands follows the list:

Step 1: Run consistency checks on the ASE database in the source (AIX) environment, to make sure that everything is fine.

Step 2: Put the database in single-user mode.

Step 3: Make sure there is no user activity on the source database.

Step 4: Run sp_flushstats in the database.

Step 5: Take a backup (dump) of the database in the source (AIX) environment.

Step 6: FTP the dump files to the target environment (AIX to Linux).

Step 7: Create and build the dataserver and databases in the target Linux environment with exactly the same configuration.
You might need to change some configuration parameters in the Linux environment from a performance point of view (let's not discuss that here, as it is out of context).

Step 8: Also migrate the logins and roles from the source server to the target server.

Step 9: Load the database in the Linux environment.
(If there was user activity during the dump process, the load will fail.)

Step 10: Online the database. If the target ASE version is newer than the source, this step also performs the upgrade.

Step 11: Fix the suspect indexes using xp_postload. If the database size is more than 20 GB, drop and re-create the indexes instead; in that case xp_postload would not be effective.

Step 12: Update statistics on all tables.

Step 13: If there is a replication setup in your environment, set up replication after that.
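A consolidated sketch of the key commands from the steps above, assuming a database named salesdb and illustrative dump paths (adjust names and paths to your environment):

On the source (AIX) server:

1> use master
2> go
1> dbcc checkdb(salesdb)
2> go
1> sp_dboption salesdb, "single user", true
2> go
1> use salesdb
2> go
1> sp_flushstats
2> go
1> dump database salesdb to "/dumps/salesdb.dmp"
2> go

On the target (Linux) server, after transferring the dump:

1> load database salesdb from "/dumps/salesdb.dmp"
2> go
1> online database salesdb
2> go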

Issues faced:

1. If any user is online during the backup process, your load will fail (at the cross-platform conversion step).

2. After onlining the database, we saw negative values in the sp_helpdb output for a few databases. There are two ways to fix this:

i) Run dbcc usedextents:

dbcc usedextents(<DB name or DB ID>, 0, 1, 1)

ii) Add trace flags 7408 and 7409 to the RUN_<server> file and reboot the instance (see the sketch below). This does not take much time compared to the first option.

Trace flag 7408: forces the server to scan *log segment* allocation pages, to recalculate free log space rather than use saved counts at boot time.
Trace flag 7409: forces the server to scan *data* segment allocation pages, to recalculate free data page space rather than use saved counts at boot time.
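For illustration, a sketch of a RUN_<server> file with the trace flags added (the server name, release directory, and file paths are placeholders):

/opt/sybase/ASE-15_0/bin/dataserver \
-d/opt/sybase/data/master.dat \
-e/opt/sybase/ASE-15_0/install/MYSERVER.log \
-c/opt/sybase/ASE-15_0/MYSERVER.cfg \
-sMYSERVER \
-T7408 -T7409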

Please let me know if you are planning a migration and need any assistance.

How to lock the last login with the "sa_role" role

February 24th, 2013

By default, ASE does not allow you to lock the last unlocked login that has the sa_role or sso_role. However, that login can end up locked if the role has an explicit password that must be supplied to enable it at login.

If a role is altered to have a limit on failed attempts, and a login attempts to enable the role and fails the required number of times, the role is locked for all holders of the role. Likewise, since we cannot explicitly lock the last unlocked login with sa_role or sso_role, failed login attempts can indirectly lock that login.
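A minimal sketch of how a role can end up in this situation: give sa_role an explicit password and a failed-attempt limit (the password and limit are illustrative; verify the alter role syntax against your ASE version's reference manual):

1> alter role sa_role add passwd "S3cretPwd"
2> go
1> alter role sa_role set max failed_logins 3
2> go

After this, three failed attempts to enable sa_role lock the role for all of its holders, which can indirectly lock even the last login holding it.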

Top Five Reasons to Choose SAP Sybase ASE

February 13th, 2013

How to handle database object security through stored procedures

February 13th, 2013

Case: user X runs a stored procedure that accesses a table owned by another user Y, and that table has a trigger that updates another table owned by user Z.

Clumsy solution -> Grant explicit permissions to user X on table1 and table2, and similarly grant execute permission and table access to users Y and Z. In a production environment, with many users in a database and thousands of objects, this is not practical.

Good solution -> Create all objects with the dbo user and grant user X execute permission ONLY on the stored procedures. When user X executes a stored procedure, the SP can modify table1, and the trigger on table1 can modify table2, even though explicit access on the tables has not been granted to user X. This is how stored procedures provide security: user X cannot access table1 and table2 directly, but a stored procedure run by user X can modify those tables.
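A minimal sketch of the dbo-owned approach, run while logged in as the database owner (object, column, and user names are illustrative):

1> create table table1 (id int, amount int)
2> go
1> create procedure proc_upd_table1 @id int, @amount int
2> as
3> update table1 set amount = @amount where id = @id
4> go
1> grant execute on proc_upd_table1 to userX
2> go

Because the procedure and the table share the same owner (dbo), userX can change table1 only through the procedure; a direct update on table1 by userX fails with a permission error.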

This is how objects are typically created in production environments.

SAP Sybase ASE, SAP Sybase IQ and SAP HANA - In a Snapshot

February 3rd, 2013
SAP Sybase ASE:

  • High-performance, reliable, scalable, and resource-efficient OLTP database; heavily optimized for virtualized and cloud infrastructure.

  • Capable of handling transactional workloads; a good choice in case of budgetary constraints.

  • Planned to be the preferred transactional database in the SAP Real-Time Data Platform for running SAP applications, SAP partner applications, and custom applications.

SAP Sybase IQ:

  • Planned to focus on evolving columnar compression capabilities for effectively storing and processing data.

  • Planned to be leveraged as near-line storage for SAP BW, with its technology also leveraged in the SAP ILM portfolio for archiving from the Business Suite.

  • High-performance analytics server planned to deliver cost-effectively for large-volume data scenarios:
    open platform for large-scale data warehouses; analytical data mart for aged data in a BW+HANA+IQ scenario; cost-effective data management scaling to extreme data volumes.

SAP HANA:

  • Aims to provide maximum performance and efficiency when running both OLTP and OLAP workloads on the SAME DATA and at the SAME TIME.

  • In-memory database with very high capability to handle transactional workloads.

  • Planned to be the preferred database in the SAP Real-Time Data Platform for applications that need to run analytic and transactional workloads on the same data at the same time.

  • Planned to leverage IP from Sybase IQ, such as in-database algorithms, Hadoop integration, index optimizations, and ELT capabilities for performing transforms in the HANA database.

  • Planned to be leveraged as the database for native applications and SAP BW, as well as the Business Suite.

  • Real-time platform for analytics and applications, with simplified data management approaches to deliver real-time results:
    operational/agile data mart for real-time scenarios; analytical data mart for complex real-time calculations; database for BW powered by SAP HANA.