Posts Tagged ‘News’

ASE15 New Features – II : DBA Perspective

September 15th, 2012

Source: Sybase resources on the web

In 2009, I posted ASE 15 New Features - I. Sybase has since released ASE 15.7 ESD#2, and below are the new features added in recent versions.

Happy Reading and Enjoy !


ASE 15.7 ESD#2 New Features

• Automatic compressed shared memory dump
• In-Row Large Object Compression
• create database Asynchronously
• Shared Query Plans
• User-Defined Optimization Goal
• Expanded Maximum Database Size
• alter table drop column without datacopy
• Enhancements to dump and load : Dump Configuration, History.
• Hash-Based Update Statistics
• Concurrent dump database and dump transaction Commands
• Fast-Logged Bulk Copy
• Enhancements to show_cached_plan_in_xml
• Merging, Splitting & Moving Partitions
• Non-blocking Reorg
• Deferred Table Creation
• Granular Permissions & Predicate Privileges
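A couple of these ESD#2 features are easiest to picture in T-SQL. The sketch below is illustrative only: the table, column, and partition names are invented, and exact clause spellings vary by ESD level, so verify against your server's documentation before relying on them.

```sql
-- Hedged T-SQL sketch of two 15.7 ESD#2 features (object names invented).

-- Drop a column without physically copying the table's data;
-- the 'with no datacopy' clause defers the physical rebuild.
alter table sales drop legacy_code
with no datacopy

-- Merge two range partitions into one (check the exact syntax
-- for your ESD level before using in production).
alter table sales
merge partition q1_2012, q2_2012 into h1_2012
```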
ASE 15.7 New Features

• Application Functionality Configuration Group
• ASE Thread-Based Kernel: The ASE kernel is now thread-based instead of process-based
• Data Compression : Use less storage space for the same amount of data, reduce cache memory consumption and improve performance because of lower I/O demands
• New Security Features: End-to-end CIS Kerberos authentication, dual control of encryption keys and unattended startup, secure logins, roles and password management and login profiles
• Abstract Plans in Cached Statements: Abstract plan information can be saved in statement cache
• Shrink Log Space: Allows you to shrink the log space and free storage without re-creating the database using the alter database command to remove unwanted portions of a database log
• Display Currently Set Switches: Allows visibility of all traceflags at the server and session level
• Changes for Large Objects: Includes storing in-row LOB columns for small text, image and unitext datatypes, storing declared SQL statements containing LOBs, indirectly referencing a LOB in T-SQL statements, and allows checking for null values of large objects
• Showing Cached Plans in XML: Allows showplan output in XML for a statement in cache
• Padding a Character Field Using str: Fields can be padded with a specified character or numeric
• Changes to select for update: Allows the select for update command to exclusively lock rows for subsequent updates within the same transaction, and for updatable cursors
• Creation of non-materialized, non-NULL columns
• Sharing Inline Defaults: Allows sharing inline defaults between different tables in the same db
• Monitoring data is retained to improve query performance
• Dynamic parameters can be analyzed before running a query to avoid inefficient query plans
• Monitor Lock Timeouts
• Enable and disable truncation of trailing zeros from varbinary and binary null data
• Full Recoverable DDL: Use dump transaction to fully recover the operations that earlier versions of Adaptive Server minimally logged
• Transfer Rows from Source to Target Table Using merge.
• View Statistics and Histograms with sp_showoptstats: Allow you to extract and display, in an XML document, statistics and histograms for various types of data objects from system tables
• Changes to Cursors: Changes to how cursors lock, manage transactions and are declared
• Nested select Statement Enhancements: Expands the abilities of the asterisk (*)
• Some system procedures can run in sessions that use chained transaction mode
• Expanded Variable-Length Rows: Redefines data-only locked (DOL) columns to use a row offset of up to 32767 bytes. Requires a logical page size of 16K to create wide, variable-length DOL rows.
• Like Pattern Matching: Treat square brackets individually in the like pattern-matching algorithm
• Quoted Identifiers: Use quoted identifiers for tables, views, column names, index names and system procedure parameters
• Allow Unicode Noncharacters: Enable permissive unicode configuration parameter, which is a member of enable functionality group, allows you to ignore Unicode noncharacters
• Reduce Query Processing Latency: Enables multiple client connections to reuse or share dynamic SQL lightweight procedures (LWPs)
• The sybdiag Utility: A new Java-based tool that collects comprehensive ASE configuration and environment data for use by Sybase Technical Support
• The optimizer Diagnostic Utility: Adds the sp_opt_querystats system procedure, which allows you to analyze the query plan generated by the optimizer and the factors that influenced its choice of a query plan
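Two of the 15.7 features above lend themselves to short T-SQL sketches. These are hedged examples: the table, column, and device names are invented, and the exact clause spellings should be verified against the 15.7 documentation.

```sql
-- Hedged T-SQL sketches of two 15.7 features (object names invented).

-- Transfer rows from a source table to a target table using merge:
merge into price_list as t
using price_updates as s
on t.sku = s.sku
when matched then
    update set t.price = s.price
when not matched then
    insert (sku, price) values (s.sku, s.price)

-- Shrink log space by removing an unwanted log device from a database
-- (the 'log off' clause; confirm the syntax for your ESD level):
alter database pubs2 log off old_logdev
```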

ASE 15.5 New Features

• In-memory databases provide improved performance by operating entirely in-memory and not reading/writing transactions to disk.
• Relaxed-durability disk-resident databases deliver enhanced performance by relaxing the guarantee that committed transactions are durable on disk after a crash.
• “dump database” and “load database” functionality is provided for both in-memory and relaxed-durability databases.
• Faster compression for backups is provided by two new compression options (level 100 and 101).
• Backup Server support is now available for IBM’s Tivoli Storage Manager.
• Deferred name resolution allows the creation of stored procedures before the referenced objects are created in the database.
• FIPS 140-2 encryption is now provided for login passwords that are transmitted, stored in memory or stored on disk.
• Incremental Data Transfer allows exporting specific rows, based on either updates since the last transfer or by selected rows for an output file, and does so without blocking ongoing reads and updates.
• The new bigdatetime and bigtime datatypes provide microsecond level precision.
• You can now create and manage user-created tempdb groups, in addition to the default tempdb group.
• The new monTableTransfer table provides historical transfer information for tables.
• The new system table, spt_TableTransfer, stores results from table transfers.
• The sysdevices table has been modified to list the in-memory storage cache under the “name” and “phyname” columns.
• Auditing options have been added to support in-memory and relaxed-durability databases, incremental data transfer, and deferred name resolution.
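The 15.5 in-memory database and bigdatetime features can be sketched in T-SQL as well. This is a hedged sketch: the database, device, and table names are invented, and the exact create clauses should be checked against the 15.5 documentation.

```sql
-- Hedged sketch of 15.5 features (names invented; verify syntax).

-- An in-memory database lives in an in-memory storage cache and,
-- with durability set to no_recovery, makes no promise that
-- committed transactions survive a restart:
create inmemory database fast_db
on imdb_dev = '2G'
with durability = no_recovery

-- The new bigdatetime datatype stores microsecond-level precision:
create table trades (
    trade_id    int,
    executed_at bigdatetime   -- microsecond precision
)
```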

ASE15 New Features – I :

New features in SAP Sybase ASE 15.7 ESD#2

August 23rd, 2012

Really good and useful new features in Sybase ASE 15.7 ESD#2, as seen on Rob's blog:


A few weeks ago (early August 2012), ASE 15.7 ESD2 was released. Despite being an ESD, this ASE release actually contains a truckload of great new features as well. You should really check these out since some of these are just so useful… Let me just highlight a selected few that I like particularly (I hope to come back to some of these topics in the time ahead).

* larger database size – First, the maximum size of an ASE database has been doubled as of 15.7 ESD2. This means that with a 2KB or 16KB page size, the maximum size of a single ASE database is now 8TB or 64TB, respectively (and you can still have thousands of such databases in a single ASE server).
* async database creation
* split/merge partition
* non-blocking table rebuild
* faster update statistics
* faster index creation
* query performance
* Materialized Views
* security & permission

Source :

SAP’s Chen Challenges Oracle, IBM With Database Advances

August 23rd, 2012

SAP AG (SAP), chasing Oracle (ORCL) Corp. and International Business Machines Corp. (IBM) in the $26.7 billion database market, is betting on new product features to gain ground, said the man who helped build the company’s arsenal.

Read Full Story @

Categories: ASE, HANA, News, SAP

Freudenberg IT Creates a Center of Excellence for SAP HANA® and SAP® Sybase ASE

July 11th, 2012

Freudenberg IT (FIT) announced today the creation of its Advanced Databases Center of Excellence (COE). The Advanced Databases COE will allow customers to leverage the business advantages of the latest SAP® database platforms, including SAP HANA® and SAP® Sybase Adaptive Server® Enterprise (SAP Sybase ASE), while minimizing the cost, risk, and complexity of a database platform change.
Read more:

Categories: ASE, HANA, News, SAP

Sybase ASE Database: Cost Considerations and Advantages

June 22nd, 2012

Recently SAP contracted with IDC to conduct a study of relational database management (RDBMS) users to determine the cost factors encountered by those users in running a relational database, and the extent to which (if any) Sybase ASE could save users money in running their systems. IDC recruited 12 organizations that were willing to let us examine deeply their costs in running both Sybase ASE and other RDBMS products.

IDC asked a number of detailed questions regarding the organizations’ use of database software, their staffing costs, their hardware costs, and their software license and maintenance costs. IDC then analyzed the data using a well-established five-year model for calculating total cost of ownership (TCO), and came to some compelling conclusions.

Read More :

Categories: ASE, News

SAP’s Hasso Plattner: Father of SAP’s In-Memory Technology HANA

May 30th, 2012

Sir Hasso Plattner is a cofounder of software giant SAP AG. Today he is Chairman of the Supervisory Board of SAP AG.

Plattner founded the Hasso Plattner Institute for software systems engineering based at the University of Potsdam, and in Palo Alto, California, its only source of funding being the non-profit Hasso Plattner Foundation for Software Systems Engineering.

You can see his book: In-Memory Data Management: An Inflection Point for Enterprise Applications.

You can read more about him and his book @

Sir Hasso’s Interview :

Finally, SAP’s EVP of Database Technology Platform Steve’s views on the SAP database technology roadmap.

See the future of Sybase ASE with the SAP Real-Time Data Platform:

Happy Learning SAP Sybase !!

Source: Wikipedia, among others

A SQL GURU Trapped in a SAP Sybase IQ World: AntonS

May 23rd, 2012

It was with a lot of trepidation when I sat down to write this post as I am a MS SQL Server DBA/PROGRAMMER by trade, experience and choice. Having started out on SQL Server 6.5 many many many years ago, I have always been loyal to the database I had come to know and trust. Please do not get me wrong, I do not plan to bash SQL Server as I still use it on a daily basis, this post is merely my take on the comparison of 2 completely different database products….  – AntonS

Read Full Story @


Categories: ASE, News, Sybase IQ Server

DBA Sidekick (Sybase) Android App on Google Play

April 25th, 2012

Source : Link

Key Features:

1. Search Sybase error codes and Adaptive Server Anywhere SQLCODEs to find error descriptions
2. Partial search available
3. Negative error codes need not be supplied for Adaptive Server Anywhere SQLCODEs

Categories: ASE, News

SAP Unveils Unified Strategy for Real-Time Data Management to Grow Database Market Leadership!!!

April 10th, 2012

Finally the curtain is up :

SAP today provided the following road map details and areas of strategic innovation and investment of its database portfolio to increase its database market leadership by 2015:

  • SAP HANA platform: This state-of-the-art in-memory platform is planned to be the core of the SAP real-time data platform, offering extreme performance and innovation for next-generation applications.
  • SAP Sybase ASE: SAP Sybase ASE is intended as a supported option for SAP Business Suite applications while SAP HANA is planned to augment the extreme transactions of SAP Sybase ASE with real-time reporting capabilities.
  • SAP® Sybase IQ® server: SAP Sybase IQ is planned to deliver data management for “big data” analytics, offering extremely low total cost of ownership (TCO). Progressive integration with SAP HANA is intended to provide a smart store for aged/cold data. SAP Sybase IQ is envisioned to share common capabilities and life-cycle management with the SAP HANA platform.
  • SAP® Sybase® SQL Anywhere: This market-leading mobile and embedded database with millions of deployments is planned to be the front-end database for the SAP HANA platform, extending its reach to mobile and embedded applications in real time.
  • SAP® Sybase® PowerDesigner software: This flagship data modeling, information architecture and orchestration software is envisioned to become the foundation of the modeling solution for the SAP real-time data platform, offering a large base of experts to customers. Ford Motor Company recently selected the software to drive its data modeling and management and centralize all logical and physical modeling functions.
  • SAP® Sybase® Event Stream Processor (ESP) software, SAP® Sybase® Replication Server and SAP solutions for EIM: Combined, these offerings are intended to provide data assessment and integration of batch, real-time change data capture and streaming data into the SAP real-time data platform.
  • SAP real-time data platform integrated with Hadoop: SAP HANA and SAP Sybase IQ are planned to extend support for accessing “big data” sources such as Hadoop, and offer a deeply integrated pre-processing infrastructure.

Source :

Petabyte-Size Data Stores Managed by Hadoop & MapReduce

February 11th, 2012


Source :
Today, we’re surrounded by data. People upload videos, take pictures on their cell phones, text friends, update their Facebook status, leave comments around the web, click on ads, and so forth. Machines, too, are generating and keeping more and more data. You may even be reading this book as digital data on your computer screen, and certainly your purchase of this book is recorded as data with some retailer.

The exponential growth of data first presented challenges to cutting-edge businesses such as Google, Yahoo, Amazon, and Microsoft. They needed to go through terabytes and petabytes of data to figure out which websites were popular, what books were in demand, and what kinds of ads appealed to people. Existing tools were becoming inadequate to process such large data sets. Google was the first to publicize MapReduce—a system they had used to scale their data processing needs.

This system aroused a lot of interest because many other businesses were facing similar scaling challenges, and it wasn’t feasible for everyone to reinvent their own proprietary tool. Doug Cutting saw an opportunity and led the charge to develop an open source version of this MapReduce system called Hadoop. Soon after, Yahoo and others rallied around to support this effort.

What is Hadoop ?

Hadoop is an open source framework for writing and running distributed applications that process large amounts of data. Distributed computing is a wide and varied field, but the key distinctions of Hadoop are that it is

1. Accessible: Hadoop runs on large clusters of commodity machines or on cloud computing services such as Amazon’s Elastic Compute Cloud (EC2).
2. Robust: Because it is intended to run on commodity hardware, Hadoop is architected with the assumption of frequent hardware malfunctions. It can gracefully handle most such failures.
3. Scalable: Hadoop scales linearly to handle larger data by adding more nodes to the cluster.
4. Simple: Hadoop allows users to quickly write efficient parallel code.
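The MapReduce model behind Hadoop can be sketched in a few lines of Python. This is a standalone simulation for illustration, not real Hadoop code: Hadoop Streaming runs the mapper and reducer as separate processes over stdin/stdout, and here an in-process sort stands in for the shuffle phase.

```python
#!/usr/bin/env python3
# Minimal word count in the MapReduce style (standalone simulation).
import itertools

def mapper(lines):
    """Emit (word, 1) for every word, like a streaming mapper."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Sum counts per word; assumes pairs arrive sorted by key,
    which Hadoop's shuffle phase guarantees."""
    for word, group in itertools.groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

def word_count(lines):
    # The sort between map and reduce stands in for Hadoop's shuffle.
    return dict(reducer(sorted(mapper(lines))))

if __name__ == "__main__":
    text = ["hadoop runs on commodity machines",
            "hadoop scales by adding machines"]
    print(word_count(text))
```

In a real cluster, the same mapper and reducer logic would run on many nodes in parallel, which is where the linear scaling described above comes from.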

Comparing SQL databases and Hadoop:

If Hadoop is a framework for processing data, what makes it better than standard relational databases, the workhorse of data processing in most of today’s applications? One reason is that SQL (structured query language) is by design targeted at structured data. Many of Hadoop’s initial applications deal with unstructured data such as text. From this perspective Hadoop provides a more general paradigm than SQL.
For working only with structured data, the comparison is more nuanced. In principle, SQL and Hadoop can be complementary, as SQL is a query language which can be implemented on top of Hadoop as the execution engine. But in practice, SQL databases tend to refer to a whole set of legacy technologies, with several dominant vendors, optimized for a historical set of applications. Many of these existing commercial databases are a mismatch to the requirements that Hadoop targets.
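As a concrete illustration of SQL layered on Hadoop, Hive compiles SQL-style queries into MapReduce jobs over files in HDFS. The table and column names below are invented for illustration; treat this as a hedged HiveQL sketch, not a tested script.

```sql
-- Hypothetical HiveQL sketch: the table and columns are invented.
-- Hive translates this query into MapReduce jobs behind the scenes.
CREATE TABLE weblogs (ip STRING, url STRING, ts STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- Top ten most-requested URLs:
SELECT url, COUNT(*) AS hits
FROM weblogs
GROUP BY url
ORDER BY hits DESC
LIMIT 10;
```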
Some Implementation of Hadoop for production purpose :

Complete List @

Sybase IQ


A 532-node cluster (8 * 532 cores, 5.3PB).
Heavy usage of Java MapReduce, Pig, Hive, HBase
Using it for Search optimization and Research.


We use Hadoop to store copies of internal log and dimension data sources and use it as a source for reporting/analytics and machine learning.

Currently we have 2 major clusters:

A 1100-machine cluster with 8800 cores and about 12 PB raw storage.
A 300-machine cluster with 2400 cores and about 3 PB raw storage.
Each (commodity) node has 8 cores and 12 TB of storage.
We are heavy users of both streaming as well as the Java APIs. We have built a higher-level data warehousing framework using these features, called Hive. We have also developed a FUSE implementation over HDFS.


We have multiple grids divided up based upon purpose. Hardware:
120 Nehalem-based Sun x4275, with 2×4 cores, 24GB RAM, 8x1TB SATA
580 Westmere-based HP SL 170x, with 2×4 cores, 24GB RAM, 6x2TB SATA
1200 Westmere-based SuperMicro X8DTT-H, with 2×6 cores, 24GB RAM, 6x2TB SATA
CentOS 5.5 -> RHEL 6.1
Sun JDK 1.6.0_14 -> Sun JDK 1.6.0_20 -> Sun JDK 1.6.0_26
Apache Hadoop 0.20.2+patches -> Apache Hadoop 0.20.204+patches
Pig 0.9 heavily customized
Azkaban for scheduling
Hive, Avro, Kafka, and other bits and pieces…


We use Hadoop to store and process tweets, log files, and many other types of data generated across Twitter. We use Cloudera’s CDH2 distribution of Hadoop, and store all data as compressed LZO files.

We use both Scala and Java to access Hadoop’s MapReduce APIs
We use Pig heavily for both scheduled and ad-hoc jobs, due to its ability to accomplish a lot with few statements.
We employ committers on Pig, Avro, Hive, and Cassandra, and contribute much of our internal Hadoop work to open source (see hadoop-lzo)
For more on our use of Hadoop, see the following presentations: Hadoop and Pig at Twitter and Protocol Buffers and Hadoop at Twitter


More than 100,000 CPUs in >40,000 computers running Hadoop
Our biggest cluster: 4500 nodes (2*4-CPU boxes with 4*1TB disk & 16GB RAM)
Used to support research for Ad Systems and Web Search
Also used to do scaling tests to support development of Hadoop on larger clusters
Our Blog – Learn more about how we use Hadoop.
>60% of Hadoop Jobs within Yahoo are Pig jobs.



Categories: News, Sybase IQ Server