Challenge 1: Massive Data Growth
Massive amounts of data are created every year, and per the IDC/EMC report, data will grow to 40,000 exabytes (40K EB) by 2020:
Challenge 2: Fast access to information for business decision making.
Businesses and people want fast, exact, and correct answers to their questions from this massive amount of data.
Challenge 3: Current technologies cannot keep up with this massive data growth.
Historical DBMS:
Historically, database systems were designed to perform well on computers with limited RAM; as a result, slow disk I/O was the main bottleneck in data throughput. Consequently, the architecture of these systems focused on optimizing disk access, e.g. by minimizing the number of disk blocks (or pages) to be read into main memory when processing a query.
New Hardware Architecture (128 or more CPU cores and 2 TB of RAM)
Computer architecture has changed in recent years. Multi-core CPUs (multiple processor cores on one chip or in one package) are now standard, with fast communication between cores enabling parallel processing. Main memory is no longer a limited resource: modern servers can have 1 TB of system memory, which allows complete databases to be held in RAM. Current server processors have up to 80 cores, and 128 cores will soon be available. With the increasing number of cores, CPUs can process more data per time interval. This shifts the performance bottleneck from disk I/O to the data transfer between CPU cache and main memory.
The Need for In-Memory Technology: SAP HANA
From the discussion above it is clear that traditional databases may not use current hardware efficiently and may not be able to fulfill current and future business needs.
The SAP HANA database is a relational database that has been optimized to leverage state-of-the-art hardware. It provides all of the SQL features of a standard relational database along with a rich set of analytical capabilities.
Using groundbreaking in-memory hardware and software, HANA can manage data at massive scale, analyze it at amazing speed, and give the business not only instant access to real-time transactional information and analysis but also more flexibility: flexibility to analyze new types of data in different ways without creating custom data warehouses and data marts, and even the flexibility to build new applications that were not possible before.
HANA Database Features
Important database features of HANA include OLTP and OLAP capabilities, extreme performance, in-memory storage, massively parallel processing, a hybrid database engine, column store, row store, complex event processing, a calculation engine, compression, virtual views, partitioning, and no aggregates. The HANA in-memory architecture includes the In-Memory Computing Engine and the In-Memory Computing Studio for modeling and administration. Each of these properties deserves a detailed explanation, followed by the SAP HANA architecture.
Source: www.sap.com and EMC/IDC reports.
Always migrate the development environment first, then UAT. Before moving to production, perform regression testing in the UAT environment.
Consider creating a script that runs update statistics and xp_postload (or drops and re-creates indexes) for each and every database.
Steps for an ASE database (you can repeat the same steps for the other databases):
Step 1: Run consistency checks on the ASE database in the source (AIX) environment to make sure that everything is fine.
Step 2: Put the database in single-user mode.
Step 3: Make sure there is no user activity on the source database.
Step 4: Run sp_flushstats in the database.
Step 5: Take a backup (dump) of the database in the source (AIX) environment.
Step 6: FTP the files to the target environment (AIX to Linux).
Step 7: Create and build the dataserver and databases in the target Linux environment with exactly the same configuration.
You may need to change some configuration parameters in the Linux environment from a performance point of view (not discussed here, as it is out of scope).
Step 8: Migrate the logins and roles from the source server to the target server.
Step 9: Load the database in the Linux environment.
(If there was user activity during the dump process, the load will fail.)
Step 10: Online the database. If the target ASE version is newer than the source, an upgrade is also performed in this step.
Step 11: Fix the corrupt indexes using xp_postload. If the database size is more than 20 GB, drop and re-create the indexes instead; xp_postload would not be effective in that case.
Step 12: Update statistics on all tables.
Step 13: If there is a replication setup in your environment, re-establish replication afterwards.
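The step sequence above can be sketched as a small driver script. This is only an illustrative sketch, not the author's tooling: the database name, dump path, and the idea of batching the SQL for isql are all assumptions, and in practice each batch runs against a live ASE server.

```python
# Minimal sketch of the migration steps as isql command batches.
# The database name, dump path, and server layout are hypothetical;
# each batch would be piped to isql against the appropriate server.

DB = "salesdb"                 # hypothetical database name
DUMP = f"/dumps/{DB}.dmp"      # hypothetical dump file location

source_steps = [
    f"dbcc checkdb({DB})",                     # Step 1: consistency checks
    f"sp_dboption {DB}, 'single user', true",  # Step 2: single-user mode
    "sp_flushstats",                           # Step 4: flush in-memory stats
    f"dump database {DB} to '{DUMP}'",         # Step 5: backup
]

target_steps = [
    f"load database {DB} from '{DUMP}'",       # Step 9: load on Linux
    f"online database {DB}",                   # Step 10: online (and upgrade)
    # Step 11: fix indexes via xp_postload, or drop/re-create for large DBs
    # Step 12: update statistics, one command per table in practice
]

def as_isql_batch(statements):
    """Join statements into an isql input batch, one 'go' per statement."""
    return "\n".join(s + "\ngo" for s in statements)

print(as_isql_batch(source_steps))
```

The point of the sketch is simply that the source-side and target-side work are separate batches, with the FTP transfer (Step 6) in between.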
1. If any user is active during the dump process, your load will fail (in the cross-platform conversion step).
2. After onlining the database, we have seen negative values in the sp_helpdb output for a few databases. There are two ways to fix this:
i) Run dbcc:
dbcc usedextents(<DB name or DB ID>, 0, 1, 1)
ii) Use trace flags 7408 and 7409 in the RUN server file and reboot the instance. This takes much less time than the first option.
Trace flag 7408: forces the server to scan *log segment* allocation pages at boot time, to recalculate free log space rather than use saved counts.
Trace flag 7409: forces the server to scan *data* segment allocation pages at boot time, to recalculate free data page space rather than use saved counts.
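As a small illustration of option (ii), the sketch below appends the two trace flags to the dataserver invocation line of a RUN_&lt;server&gt; file. The file path and existing options shown are hypothetical; your real RUN file will differ.

```python
# Sketch: add -T7408 and -T7409 to the dataserver start-up line
# of a RUN_<server> file. The path and options are hypothetical.

def add_traceflags(run_line, flags=(7408, 7409)):
    """Append -T<flag> options to a dataserver invocation line."""
    extra = " ".join(f"-T{f}" for f in flags)
    return f"{run_line} {extra}"

run_line = "/sybase/ASE-15_0/bin/dataserver -d/sybase/data/master.dat -sMYSERVER"
print(add_traceflags(run_line))
# ... -sMYSERVER -T7408 -T7409
```

After editing the RUN file this way, reboot the instance so the flags take effect at boot time.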
Please let me know if you are planning a migration and need any assistance.
The wait is over now!
Please download the complete ebook for the SAP Sybase ASE Q&A Bank, Version 1.0, below:
For those who are familiar with earlier versions of Sybase IQ, here are the new features added in SAP Sybase IQ 15.
Big data is a necessity at scale: if you’re trying to listen to every transatlantic phone call, you need to use MapReduce. … if you need to search the entire internet in milliseconds, you need to use MapReduce; if you need to run the largest social network in the world, you need to use MapReduce. If you don’t, you can probably scale with a database.
By default, ASE does not allow you to lock the last unlocked login that has the sa_role or sso_role. However, such a login can be locked if the role has an explicit password set that must be supplied to enable the role at login.
If a role is altered to have a limit on failed attempts, and a login attempts to enable the role and fails the required number of times, the role is locked for all holders of the role. Likewise, since we cannot explicitly lock the last unlocked login with sa_role or sso_role, failed login attempts can indirectly lock that login.
Sybase DBA Commands: SybaseDBA_commands
Sybase DBA Notes: Sybase DBA ImportantNotes
Please let us know if you have any queries.
SAP HANA can be defined as an appliance that combines SAP software components optimized for hardware provided by SAP’s leading hardware partners. It comes as a bundle of software along with the hardware (servers). SAP HANA servers are sold in “t-shirt” sizes ranging from extra-small (128 GB RAM) all the way up to extra-large (>2 TB RAM), with multi-core CPUs.
SAP HANA is both a database (in the traditional sense) and a database platform (in the modern sense).
In its current form SAP HANA can be used for four basic types of use case:
1. An agile data mart (a stand-alone database for reporting).
2. An SAP Business Suite accelerator (a secondary database for SAP Business Suite, used for reporting, calculation, and analysis).
3. A primary database for an SAP NetWeaver warehouse.
4. A development platform for new applications.
To store a table in memory, two options exist: (1) row-based storage and (2) column-based storage.
In row-based storage, a table is stored as a sequence of records, i.e., one full row in a data page (or consecutive data pages). This means all column values of a row are stored sequentially.
In column-based storage, the values of a column are stored in contiguous memory locations.
Column-based storage is advantageous in the following circumstances:
1. Calculations are typically executed on a single column or only a few columns.
2. The table is searched based on the values of a few columns.
3. The table has a large number of columns.
4. The table has a large number of rows, so that columnar operations (aggregate, scan, etc.) are required.
5. High compression rates can be achieved, because the majority of the columns contain only a few distinct values (compared to the number of rows).
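Point 5 relies on dictionary encoding, a common column-store compression scheme. The sketch below, using made-up data, shows why a column with few distinct values compresses well: each value is replaced by a small integer code into a shared dictionary.

```python
# Dictionary encoding sketch: a column with few distinct values is
# stored as a small dictionary plus one integer code per row.

def dictionary_encode(column):
    """Return (dictionary, codes) where codes[i] indexes into dictionary."""
    dictionary = sorted(set(column))
    index = {value: i for i, value in enumerate(dictionary)}
    return dictionary, [index[value] for value in column]

# A 6000-row "country" column with only 3 distinct values
column = ["DE", "US", "DE", "IN", "US", "DE"] * 1000
dictionary, codes = dictionary_encode(column)

# 3 dictionary entries plus 6000 small integers replace 6000 strings
assert dictionary == ["DE", "IN", "US"]
assert len(codes) == len(column)
```

With mostly distinct values the dictionary would be nearly as large as the column itself, which is exactly why compression favors columns with few distinct values.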
Row-based storage is advantageous in the following circumstances:
1. The application needs to process only a single record at a time (this applies to many selects and/or updates of single records).
2. The application typically needs to access the complete record (or row).
3. The columns contain primarily distinct values, so the compression rate would be low.
4. Neither aggregations nor fast searching is required.
5. The table has a small number of rows (e.g., configuration tables).
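To make the two layouts concrete, here is a toy sketch (with made-up column names and data) of the same table stored row-wise and column-wise, and the access pattern each layout favors.

```python
# The same three-row table in both layouts (data is illustrative).

rows = [  # row store: all column values of a record kept together
    {"id": 1, "region": "EMEA", "amount": 100},
    {"id": 2, "region": "APJ",  "amount": 250},
    {"id": 3, "region": "EMEA", "amount": 175},
]

columns = {  # column store: each column's values kept contiguously
    "id":     [1, 2, 3],
    "region": ["EMEA", "APJ", "EMEA"],
    "amount": [100, 250, 175],
}

# Single-column aggregate: the column store reads only "amount"
total = sum(columns["amount"])   # 525

# Full-record access: the row store yields the whole record in one lookup
record = rows[1]                 # all columns of the second row

print(total, record)
```

The aggregate touches one contiguous list in the columnar layout, while fetching a complete record is a single lookup in the row layout; that asymmetry is the essence of the two advantage lists above.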