

Archive for the ‘News’ Category

SAP’s Hasso Plattner: Father of SAP’s In-Memory Technology HANA

May 30th, 2012 No comments

Sir Hasso Plattner is a co-founder of the software giant SAP AG. Today he is Chairman of the Supervisory Board of SAP AG.

Plattner founded the Hasso Plattner Institute for software systems engineering, based at the University of Potsdam and in Palo Alto, California; its only source of funding is the non-profit Hasso Plattner Foundation for Software Systems Engineering.

You can see his book, In-Memory Data Management: An Inflection Point for Enterprise Applications.

You can read more about him and his book @ http://no-disk.com/

Sir Hasso’s Interview:
[youtube http://www.youtube.com/watch?v=W6S5hrPNr1E]

Finally, here are the views of Steve, SAP’s EVP of the Database Technology Platform, on the SAP database technology roadmap.

See the future of Sybase ASE with the SAP Real-Time Data Platform:
[youtube http://www.youtube.com/watch?v=OReE_qu8zmI]

Happy Learning SAP Sybase !!

Source: Google.com, Wikipedia, sap.com, youtube.com, no-disk.com

Useful Sybase ASE Tools available for free download…

May 24th, 2012 No comments

Some time ago I started distributing some tools I have created to assist me in my professional life as a full-time production DBA, Sybase migration analyst and Sybase database architect.

Some are years old, others are very fresh (the last one was developed two days ago).  There are additional tools that I will publish in the near future.  The aim of all of the tools is to make the life of a DBA/analyst/architect a bit easier.  I by no means claim to be the ASE tool guru; there are other tools available out there, some of them pretty costly.  Still, these tools may be of some use to those who have a particular need in view and lack an alternative.

Rather than repeat myself, I am providing here a link to the description of each particular tool.

There are four tools so far: the first is aimed at using and learning the MDA tables better; the second creates pin-pointed monitoring of an ASE server, something even a novice can use; the third makes parsing Sybase sysmon reports a bit more convenient; and the last makes it easier to reuse all those handy queries DBAs accumulate over the years.

Below are the tools, with screenshots and links.

1.  ASE Monitoring (MDA) Tables Assistant

2.  A Simple Graphical Monitor for Sybase ASE

3.  ASE System Report (Sysmon) Parser

4.  ASE Frequent QueryAssistant

You are welcome to use them and post comments.
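If you have not worked with the MDA tables before, here is a minimal sketch of the kind of query the first tool is built around. It is only an illustration, not part of the tools themselves: it assumes the jConnect JDBC driver is on the classpath, that the ASE monitoring ("mon") tables are enabled, and the host, port and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: read a few columns from the MDA table monProcessActivity
// to see which sessions are burning CPU. Assumes jConnect is available and
// the "enable monitoring" configuration options are switched on in ASE.
public class MdaSnapshot {
    public static void main(String[] args) throws Exception {
        Class.forName("com.sybase.jdbc4.jdbc.SybDriver");   // jConnect 7; older installs use the jdbc3 package
        String url = "jdbc:sybase:Tds:myhost:5000/master";  // placeholder host/port
        try (Connection conn = DriverManager.getConnection(url, "sa", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "select SPID, CPUTime, WaitTime, LogicalReads, PhysicalReads " +
                 "from master..monProcessActivity order by CPUTime desc")) {
            while (rs.next()) {
                System.out.printf("spid=%d cpu=%d wait=%d lio=%d pio=%d%n",
                    rs.getInt("SPID"), rs.getInt("CPUTime"), rs.getInt("WaitTime"),
                    rs.getInt("LogicalReads"), rs.getInt("PhysicalReads"));
            }
        }
    }
}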

P.S.  I wish to thank the hosts of this blog for the opportunity to use it to make the tools available to a wider public.

It was my pleasure to share.

Andrew T.M.

A SQL GURU trapped in a SAP Sybase IQ World: AntonS

May 23rd, 2012 No comments

It was with a lot of trepidation that I sat down to write this post, as I am an MS SQL Server DBA/programmer by trade, experience and choice. Having started out on SQL Server 6.5 many, many years ago, I have always been loyal to the database I had come to know and trust. Please do not get me wrong: I do not plan to bash SQL Server, as I still use it on a daily basis. This post is merely my take on the comparison of two completely different database products….  – AntonS

Read Full Story @ http://www.biitb.com/?p=235

 

Categories: ASE, News, Sybase IQ Server

Calling SAP Sybase Experts…

May 16th, 2012 No comments

Dear All,

As 2011 has closed and 2012 has begun, we want to take the opportunity to thank each of you for your contributions to sybaseblog.com over the past few years. Despite the difficult circumstances, there is much we have achieved and much to be proud of.

Our ongoing efforts are crucial to improving knowledge sharing and Sybase excellence.

I would like to invite you all to contribute your focus and determination to building an SAP Sybase database community by leveraging http://sybaseblog.com, our SAP database knowledge-sharing blog.

We should be in no doubt that we have the skills, the expertise and, most importantly, the attitude to address real-time challenges using Sybase technologies in conjunction with the SAP Sybase platform.

If you are an SAP HANA or Sybase ASE/Rep/IQ/ASA expert or learner and would like to be part of sybaseblog.com: A Blog for SAP Database Technologies, you are most welcome.

We would like to hear from you: sybanva@gmail.com.

Happy Sybase Learning !

 

Team, Sybaseblog.com.

 

DBA Sidekick (Sybase) Android App on Google Play

April 25th, 2012 No comments

Source: Link

Description
Key Features:

1. Search Sybase error codes and Adaptive Server Anywhere SQLCODEs to find the error description
2. Partial search available
3. A negative error code need not be supplied for Adaptive Server Anywhere SQLCODEs

Categories: ASE, News

SAP Real Time Data Platform

April 20th, 2012 No comments

Guys,

 

Please see the SAP Real-Time Data Platform from the Sybase point of view:

 

Link: http://www.sybase.com/detail?id=1098149

Slides: http://www.sybase.com/files/Product_Overviews/DM-Vision-and-Strategies-SAP-Next-Gen-DW.pdf

 

Categories: ASE, News

SAP Unveils Unified Strategy for Real-Time Data Management to Grow Database Market Leadership!!!

April 10th, 2012 No comments

Finally, the curtain is up:

SAP today provided the following road map details and areas of strategic innovation and investment of its database portfolio to increase its database market leadership by 2015:

  • SAP HANA platform: This state-of-the-art in-memory platform is planned to be the core of the SAP real-time data platform, offering extreme performance and innovation for next-generation applications.
  • SAP Sybase ASE: SAP Sybase ASE is intended as a supported option for SAP Business Suite applications while SAP HANA is planned to augment the extreme transactions of SAP Sybase ASE with real-time reporting capabilities.
  • SAP® Sybase IQ® server: SAP Sybase IQ is planned to deliver data management for “big data” analytics, offering extreme total cost of ownership (TCO). Progressive integration with SAP HANA is intended to provide a smart store for aged/cold data. SAP Sybase IQ is envisioned to share common capabilities and life-cycle management with the SAP HANA platform.
  • SAP® Sybase® SQL Anywhere: This market-leading mobile and embedded database with millions of deployments is planned to be the front-end database for the SAP HANA platform, extending its reach to mobile and embedded applications in real time.
  • SAP® Sybase® PowerDesigner software: This flagship data modeling, information architecture and orchestration software is envisioned to become the foundation of the modeling solution for the SAP real-time data platform, offering a large base of experts to customers. Ford Motor Company recently selected the software to drive its data modeling and management and centralize all logical and physical modeling functions.
  • SAP® Sybase® Event Stream Processor (ESP) software, SAP® Sybase® Replication Server and SAP solutions for EIM: Combined, these offerings are intended to provide data assessment and integration of batch, real-time change data capture and streaming data into the SAP real-time data platform.
  • SAP real-time data platform integrated with Hadoop: SAP HANA and SAP Sybase IQ are planned to extend support for accessing “big data” sources such as Hadoop, and to offer a deeply integrated pre-processing infrastructure.

Source: http://www.sap.com/corporate-en/press/newsroom/press-releases/press.epx?pressid=18621

Structural changes at SAP presage major database push

April 6th, 2012 No comments

When SAP bought Sybase back in 2010, it appeared that the main target was its mobility software. Sybase’s core and original business – database – was widely perceived as a ‘nice to have.’ It was making money and it had some very loyal customers – and a good position in financial services, a vertical SAP is targeting for growth. But the mobility piece was the jewel in the crown: everyone wanted mobile-enabled SAP (or was expected to).

 

by Philip Carnelley

 

Full Story & Source: http://blog.pac-online.com/2012/04/structural-changes-at-sap-presage-major-database-push/

Categories: ASE, News

Petabyte-Size Data Stores Managed by Hadoop & MapReduce

February 11th, 2012 No comments

Hadoop
——–

Source: http://hadoop.apache.org/ & www.
Today, we’re surrounded by data. People upload videos, take pictures on their cell phones, text friends, update their Facebook status, leave comments around the web, click on ads, and so forth. Machines, too, are generating and keeping more and more data. You may even be reading this book as digital data on your computer screen, and certainly your purchase of this book is recorded as data with some retailer.

The exponential growth of data first presented challenges to cutting-edge businesses such as Google, Yahoo, Amazon, and Microsoft. They needed to go through terabytes and petabytes of data to figure out which websites were popular, what books were in demand, and what kinds of ads appealed to people. Existing tools were becoming inadequate to process such large data sets. Google was the first to publicize MapReduce—a system they had used to scale their data processing needs.

This system aroused a lot of interest because many other businesses were facing similar scaling challenges, and it wasn’t feasible for everyone to reinvent their own proprietary tool. Doug Cutting saw an opportunity and led the charge to develop an open source version of this MapReduce system called Hadoop. Soon after, Yahoo and others rallied around to support this effort.

What is Hadoop?
————–

Hadoop is an open source framework for writing and running distributed applications that process large amounts of data. Distributed computing is a wide and varied field, but the key distinctions of Hadoop are that it is:

1. Accessible—Hadoop runs on large clusters of commodity machines or on cloud computing services such as Amazon’s Elastic Compute Cloud (EC2).
2. Robust—Because it is intended to run on commodity hardware, Hadoop is architected with the assumption of frequent hardware malfunctions. It can gracefully handle most such failures.
3. Scalable—Hadoop scales linearly to handle larger data by adding more nodes to the cluster.
4. Simple—Hadoop allows users to quickly write efficient parallel code (the classic word-count sketch below makes this concrete).
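To see how simple "simple" really is, here is that classic word-count job written against Hadoop's Java MapReduce API. Treat it as a sketch rather than a tuned production job: the input and output paths are HDFS directories passed on the command line, and on very old 0.20-era releases the Job construction line differs slightly (noted in the comment).

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// The classic word count: mappers emit (word, 1) pairs, the framework groups
// them by word across the cluster, and reducers sum the counts per word.
public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);                 // emit (word, 1)
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();                           // add up the ones for this word
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        // On 0.20-era releases use "new Job(conf, ...)" instead of Job.getInstance.
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);        // local pre-aggregation on each mapper
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Adding nodes to the cluster lets the same job process more input without any code change, which is the linear scalability mentioned in the list above.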

Comparing SQL databases and Hadoop:
————————————

Given that Hadoop is a framework for processing data, what makes it better than standard relational databases, the workhorse of data processing in most of today’s applications? One reason is that SQL (Structured Query Language) is by design targeted at structured data, while many of Hadoop’s initial applications deal with unstructured data such as text. From this perspective Hadoop provides a more general paradigm than SQL.
For working only with structured data, the comparison is more nuanced. In principle, SQL and Hadoop can be complementary, as SQL is a query language that can be implemented on top of Hadoop as the execution engine. But in practice, SQL databases tend to refer to a whole set of legacy technologies, with several dominant vendors, optimized for a historical set of applications. Many of these existing commercial databases are a mismatch to the requirements that Hadoop targets.
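As a rough illustration of that complementarity, the sketch below submits an ordinary SQL GROUP BY through Hive's JDBC driver and lets Hive compile it into MapReduce jobs over files in HDFS. The driver class, connection URL and the "logs" table are assumptions made for the example; the exact driver and URL depend on the Hive version you run.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch of SQL-on-Hadoop: an ordinary GROUP BY is handed to Hive, which
// compiles it into MapReduce jobs over files in HDFS. Table name, host and
// driver class are illustrative assumptions, not a tested configuration.
public class HiveQuerySketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");   // original Hive server; newer setups use HiveServer2
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive://localhost:10000/default", "", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT page, COUNT(*) AS hits FROM logs GROUP BY page")) {
            while (rs.next()) {
                System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
            }
        }
    }
}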
Some implementations of Hadoop in production:
——————————————————

Complete List @ http://wiki.apache.org/hadoop/PoweredBy

Sybase IQ
———
Sybase IQ: http://www.computerworld.com/s/article/9221355/Updated_Sybase_IQ_supports_Hadoop_MapReduce_Big_Data_

eBay
—-

532-node cluster (8 * 532 cores, 5.3 PB).
Heavy usage of Java MapReduce, Pig, Hive, HBase.
Using it for search optimization and research.

Facebook
——-

We use Hadoop to store copies of internal log and dimension data sources and use it as a source for reporting/analytics and machine learning.

Currently we have 2 major clusters:

A 1100-machine cluster with 8800 cores and about 12 PB raw storage.
A 300-machine cluster with 2400 cores and about 3 PB raw storage.
Each (commodity) node has 8 cores and 12 TB of storage.
We are heavy users of both the streaming and the Java APIs. We have built a higher-level data warehousing framework on top of these features, called Hive (see http://hadoop.apache.org/hive/). We have also developed a FUSE implementation over HDFS.

LinkedIn
———

We have multiple grids divided up based upon purpose.

Hardware:
120 Nehalem-based Sun x4275, with 2×4 cores, 24GB RAM, 8x1TB SATA
580 Westmere-based HP SL 170x, with 2×4 cores, 24GB RAM, 6x2TB SATA
1200 Westmere-based SuperMicro X8DTT-H, with 2×6 cores, 24GB RAM, 6x2TB SATA
Software:
CentOS 5.5 -> RHEL 6.1
Sun JDK 1.6.0_14 -> Sun JDK 1.6.0_20 -> Sun JDK 1.6.0_26
Apache Hadoop 0.20.2+patches -> Apache Hadoop 0.20.204+patches
Pig 0.9 heavily customized
Azkaban for scheduling
Hive, Avro, Kafka, and other bits and pieces…

Twitter
——–

We use Hadoop to store and process tweets, log files, and many other types of data generated across Twitter. We use Cloudera’s CDH2 distribution of Hadoop, and store all data as compressed LZO files.

We use both Scala and Java to access Hadoop’s MapReduce APIs
We use Pig heavily for both scheduled and ad-hoc jobs, due to its ability to accomplish a lot with few statements.
We employ committers on Pig, Avro, Hive, and Cassandra, and contribute much of our internal Hadoop work to open source (see hadoop-lzo)
For more on our use of Hadoop, see the following presentations: Hadoop and Pig at Twitter and Protocol Buffers and Hadoop at Twitter

Yahoo!
——–

More than 100,000 CPUs in >40,000 computers running Hadoop
Our biggest cluster: 4500 nodes (2*4cpu boxes w 4*1TB disk & 16GB RAM)
Used to support research for Ad Systems and Web Search
Also used to do scaling tests to support development of Hadoop on larger clusters
Our Blog – Learn more about how we use Hadoop.
>60% of Hadoop Jobs within Yahoo are Pig jobs.

 


Categories: News, Sybase IQ Server

Multi-Path Replication (MPR) Technology: Replication Server 15.7

January 7th, 2012 No comments

The imminent release of Replication Server 15.7 continues pushing the envelope and maintaining its leading edge by introducing the new Multi-Path Replication (MPR) technology.

So, what is MPR? MPR improves replication performance and reduces latency by enabling parallel paths of data from the source database to the target database. These parallel paths will process data independently of each other to improve overall efficiency, performance and load balancing.

Full Source @ http://blogs.sybase.com/zhangb/2011/12/replication-server-improves-performance-and-reduces-latency-with-mpr/#respond

Note:

What about the order of transactions that needs to be maintained at the target side? Even if transactions arrive rapidly at the target, they must be applied in order.

Commit order is maintained within a single path. To increase performance on a single path, one can employ the parallel DSI, bulk copy and HVAR features RS has introduced in earlier releases. To take advantage of MPR, users need to fully understand the application schema in order to divide it across paths, because commit order is not guaranteed among paths.
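A toy illustration of that last point, in plain Java rather than Replication Server syntax: think of each replication path as its own single-threaded queue. Everything bound to one path is applied in commit order, but nothing orders the two paths relative to each other, which is why tables that depend on each other should be assigned to the same path.

import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model of multi-path replication: each path is a single-threaded executor,
// so transactions bound to one path are applied in the order they were queued,
// while two different paths run independently and may interleave arbitrarily.
public class MultiPathDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pathA = Executors.newSingleThreadExecutor(); // e.g. order-entry tables
        ExecutorService pathB = Executors.newSingleThreadExecutor(); // e.g. reporting tables

        for (String txn : Arrays.asList("A1", "A2", "A3")) {
            pathA.submit(() -> System.out.println("apply " + txn + " on path A"));
        }
        for (String txn : Arrays.asList("B1", "B2")) {
            pathB.submit(() -> System.out.println("apply " + txn + " on path B"));
        }

        // A1, A2, A3 always appear in that order, as do B1, B2; the interleaving
        // between the A and B sequences is not guaranteed.
        pathA.shutdown();
        pathB.shutdown();
        pathA.awaitTermination(5, TimeUnit.SECONDS);
        pathB.awaitTermination(5, TimeUnit.SECONDS);
    }
}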

 

Categories: News, Replication Server