Channel: SAP HANA and In-Memory Computing

The Problem of Dropping ACID: Non-ACID PoS Is Unsuitable for Bitcoin and Financial Transactions


Sour.jpg

I have been following a fascinating story about Coinbase and bitcoins for a few days now. It is better than any TV show at the moment.

 

There are Silicon Valley celebrities, VCs, drama, money, lies, and scandal involved. You will not hear about it in mainstream tech blogs, because too much money is at stake.

 

Instead, you see them singing Coinbase's praises.

 

But if you watch closely, you will see the outrage, confusion, and anger of people on the internet, especially those who recently purchased bitcoins through Coinbase.

 

Coinbase: a Bitcoin Wallet

 

“Coinbase is a cloud-based Bitcoin wallet that allows users to buy and sell bitcoins with a US bank account, send and receive bitcoins with an email address, and accept bitcoin payments as a merchant.”

 

- Quoted from Wikipedia

 

Coinbase, one of the latest Silicon Valley darlings, recently raised 25 million dollars from Andreessen Horowitz and is now caught in a PR crisis: its customers have accused it of being liars and thieves, because money was taken from their bank accounts through ACH in exchange for bitcoins at market value. In one case, the amount withdrawn was US$35,104. In another, US$10,701.

 

However, in both cases, the money simply vanished from bank accounts, and no bitcoins were ever received in return.

 

Feeling uneasy about the delay, these people reached out to Coinbase's customer service and either received no reply or were treated rudely. They escalated the case to Hacker News and Reddit in order to be heard, and to seek justice.

 

Coinbase has explained that it was going through a system upgrade and there may have been data loss. They told people these problems "are only costing people theoretical profits, and not actual".

 

The ACID Test

 

Upon closer examination, one finds that Coinbase chose MongoDB as their primary datastore, which might have caused these troubling transactional errors through data loss.

 

MongoDB certainly has great advantages in certain scenarios, such as storing and processing massive amounts of unstructured data, but the tradeoffs include dropping the ACID properties that you would find in SAP HANA, which are essential for financial transactions to be reliable.

 

Quora user Chris Schrader said, “…(not being ACID compliant) means that two people looking at the exact same key may see different values.  For example, if I update key 123 from ABC to DEF, and two people query MongoDB for key 123, 1 person might see ABC and one person might see DEF.

This is because MongoDB is distributed over many nodes and will eventually "sync" the data across those nodes (referred to as eventual consistency).  A traditional RDBMS will "lock" the updated data across all its nodes when performing the update in order to ensure that everyone sees the same thing.  That is pretty important if you're storing something like financial transactions.”

 

What Does Being ACID Compliant Mean?

 

“In computer science, ACID is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction.”

 

(Quoted from Wikipedia)

 

In the world of financial transactions, ACID means:

 

  • A (Atomicity) - a transaction to transfer funds from one account to another involves making a withdrawal operation from the first account and a deposit operation on the second. If the deposit operation failed, you don’t want the withdrawal operation to happen either.
  • C (Consistency) - a database tracking a checking account may only allow unique check numbers to exist for each transaction
  • I (Isolation) - a teller looking up a balance must be isolated from a concurrent transaction involving a withdrawal from the same account. Only when the withdrawal transaction commits successfully and the teller looks at the balance again will the new balance be reported.
  • D (Durability) - A system crash or any other failure must not be allowed to lose the results of a transaction or the contents of the database. Durability is often achieved through separate transaction logs that can "re-create" all transactions from some picked point in time (like a backup).

 

(Quoted from Stackoverflow)
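To make the atomicity property concrete, here is a minimal SQL sketch of the funds-transfer example above (the ACCOUNTS table and the amounts are purely illustrative, not from any real system):

/* Both updates form one transaction: they succeed together or not at all */
CREATE TABLE ACCOUNTS (ACCOUNT_NO INTEGER PRIMARY KEY, BALANCE DECIMAL(18,2));
INSERT INTO ACCOUNTS VALUES (1, 500.00);
INSERT INTO ACCOUNTS VALUES (2, 0.00);

UPDATE ACCOUNTS SET BALANCE = BALANCE - 100.00 WHERE ACCOUNT_NO = 1;  -- withdrawal
UPDATE ACCOUNTS SET BALANCE = BALANCE + 100.00 WHERE ACCOUNT_NO = 2;  -- deposit
COMMIT;  -- only now are both changes durable

-- Had the deposit failed, ROLLBACK would have undone the withdrawal as well.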

 

We do not know exactly why the above-mentioned transactions failed to complete at Coinbase, but since ACID helps ensure the reliability of financial transactions, people have questioned the choice of MongoDB as the main datastore since the early days of Coinbase.

 

Making the Right Choice

 

SAP HANA supports the ACID principles, which protect data integrity by making sure that updates complete together or the database rolls back to its pre-transaction state. The bitcoin fiasco we are seeing now might have been avoided by choosing a database such as SAP HANA.


(Photo credit: Flickr)


Knowledge Sharing from TechEd Bangalore - SAP Business Application Accelerator - SAP HANA DB as Secondary DB


Introduction:

This topic explains the advantages of the SAP Business Application Accelerator, which allows existing programs to leverage HANA without disruption and without changing their code. The SAP Business Application Accelerator redirects database queries (such as SELECT and OPEN CURSOR) to a secondary database connection. In this case we are going to use HANA as the secondary DB.

This blog does not cover the HANA-specific application accelerators such as CO-PA, or moving data-intensive operations down to HANA. Instead, it walks through a simple use case that shows the advantages of using the SAP Business Application Accelerator and a HANA database on existing programs, without changing their code, by measuring the program's runtime.


How it works:

Runtime-intensive table selects can be redirected to the HANA database instead of being selected from the primary database.

  • Selected tables are replicated using SLT in near real-time to the HANA database
  • The following parameters identify the program context for reading from the HANA database:
    • Tables / DB Views
    • Main program
    • Batch job name (wildcards supported)
  • On kernel level a select statement within this configuration is redirected to HANA – no changes on program required


Use Cases:

  • Reports reading data from very large ERP tables
  • Reports that evaluate large tables by different characteristics – so that there is no index covering every access path
  • Batch jobs and reports that run during period-end closing


System prerequisite:

Kernel version 7.21 and the SAP Business Application Accelerator Add On.


Demo - Overview:

1.png

 

Demo - Step-by-Step:

  1. Establish a connection between the ERP system and the HANA DB. In this case we connect our ERP system to the HANA database (system HDB). We can also check the available DB connections, indicated with an arrow in the screenshot.

 

2.png


     2. SLT: SLT replicates data from the primary to the secondary database. In this case we identify time-intensive table/program combinations to be redirected to HANA, rather than the primary database, for read access.


     3. In this demo, program ZR_COEP is used, which reads the ZCEOP table. This table is replicated to the HANA DB using SLT.


     4. Maintain the configuration by preparing the XML with the scenario, program, and table names, after identifying the program and table.


3.png

     5. The configuration is maintained using the RDA_MAINTAIN program. First, the XML is uploaded via 'Upload Scenario'. Once the XML is uploaded, a database connection has to be maintained (second screenshot), and finally we activate the scenario. Configuration entries are stored in the RDA_CONTROL, RDA_CONFIG, and RDA_CONTEXT tables; you can validate your configuration entries there.


4.png


5.png

     6. The profile parameter rsdb/rda is used to switch the redirection between the two databases, via transaction code RZ11. See the prerequisites for the availability of this profile parameter. Make sure the scenario uploaded in the previous step is in active status; only then does the redirection of DBs work.


6.png

7.png

     7. Analyze the runtime by testing the program against both databases, switching the profile parameter value on and off.

     Test program is written with a select statement using aggregate functions.

     It takes an average of 25 seconds to pick the data from primary database.

     Now turn on the profile parameter and execute. The total number of records in the HANA DB is higher than in the primary DB (an additional record was added).

     It only took 4 seconds to run the program.
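The test report itself is not reproduced here; as a sketch, the redirected statement is an aggregate select of this general shape (the ZCEOP column names below are assumptions, chosen to resemble CO line items):

/* Runtime-intensive aggregate select of the kind redirected to HANA */
SELECT "OBJNR", COUNT(*) AS LINE_ITEMS, SUM("WTGBTR") AS TOTAL_VALUE
FROM "ZCEOP"
GROUP BY "OBJNR";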

 

8.png 9.png

10.png

Limitations:

  1. The SAP Business Application Accelerator cannot be used for programs where delayed replication of a table could lead to inconsistencies
  2. Update scenarios during interactive reporting on replicated tables are not possible
  3. The SAP Business Application Accelerator does not leverage all HANA capabilities, and therefore does not reach the same performance as modeling in SAP HANA Studio or the HANA application accelerators (e.g. the CO-PA Accelerator)
  4. See SAP Note 1694697

SAP HANA: Handling Dynamic Select Column List and Multiple values in input parameter



Hello Folks,


It's been a long time since I blogged. I hope you are all doing well and enjoying SAP HANA and its exciting innovations.


In this blog we will discuss how to handle multiple values passed into a filter condition, using the REPLACE function in a procedure, and how to handle a dynamic select column list, using dynamic SQL with EXECUTE IMMEDIATE.


A) Problem Description:


--> How do we handle multiple values in a filter condition?


Example:


If we have “Region” as a filtering criterion with 3 regions, namely AMER, APAC and EMEA, the user wants the flexibility to select the list of regions for which to analyze the data.


We can use the REPLACE function to split the multiple values coming from the input.
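For example, this standalone statement shows what REPLACE produces for a comma-separated input (the region values are only illustrative):

/* Turn the input 'AMER,APAC' into 'AMER','APAC', ready for an IN (...) filter */
SELECT '''' || REPLACE('AMER,APAC', ',', ''',''') || '''' AS REGION_LIST FROM DUMMY;
-- Result: 'AMER','APAC'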


--> How do we handle a dynamic select column list in the output?


Example:


Suppose we have a field such as “Employee Type” as a criterion on which the selected output columns depend, as shown below:


a) If “Employee Type” = 1 then the user should be able to see Column1, Column2, Column3
b) If “Employee Type” = 2 then the user should be able to see Column1, Column3, Column4

c) If “Employee Type” = 3 then the user should be able to see Column 2,Column3, Column4


We can use “Execute Immediate” for the above-mentioned example to retrieve the data.


 

I will explain the two techniques mentioned above in detail using the example below:


B) Preparing Data:


First, let us create a table named “EMPLOYEE” with the following DDL and insert some test records for our example, as shown below:


/* Creating a table named Employee */


CREATE COLUMN TABLE "EMPLOYEE" (
    "EMP NO"        INTEGER CS_INT,
    "EMPLOYEE NAME" VARCHAR(200),
    "EMPLOYEE TYPE" INTEGER CS_INT,
    "GENDER"        VARCHAR(10),
    "AGE"           INTEGER CS_INT,
    "REGION"        VARCHAR(10),
    "SALARY"        DECIMAL(18,0) CS_FIXED
);


/* Insert Statements */


/*Employee Type = 1: Cricket Players */

Insert into "EMPLOYEE" values (1,'Sachin',1,'M',40,'APAC',50000);
Insert into "EMPLOYEE" values (2,'Ganguly',1,'M',42,'APAC',40000);
Insert into "EMPLOYEE" values (3,'Dravid',1,'M',40,'AMER',40000);
Insert into "EMPLOYEE" values (4,'Laxman',1,'M',43,'AMER',40000);
Insert into "EMPLOYEE" values (5,'Dhoni',1,'M',35,'EMEA',40000);
Insert into "EMPLOYEE" values (6,'Sehwag',1,'M',36,'EMEA',30000);
Insert into "EMPLOYEE" values (7,'Kohli',1,'M',23,'EMEA',20000);
Insert into "EMPLOYEE" values (8,'Kumar',1,'M',22,'EMEA',10000);


/*Employee Type = 2: Tekken Players */

Insert into "EMPLOYEE" values (1,'Law',2,'M',24,'APAC',30000);
Insert into "EMPLOYEE" values (2,'Eddie',2,'M',26,'EMEA',150000);
Insert into "EMPLOYEE" values (3,'Paul',2,'M',23,'APAC',120000);
Insert into "EMPLOYEE" values (4,'Howrang',2,'M',22,'AMER',60000);
Insert into "EMPLOYEE" values (5,'Xiayou',2,'F',22,'AMER',8000);
Insert into "EMPLOYEE" values (6,'Nina',2,'F',22,'AMER',70000);


/*Employee Type = 3: Tennis Players */

Insert into "EMPLOYEE" values (1,'Federer',3,'M',30,'APAC',1150000);
Insert into "EMPLOYEE" values (2,'Nadal',3,'M',29,'APAC',5230000);
Insert into "EMPLOYEE" values (3,'Djokovic',3,'M',24,'APAC',5045000);
Insert into "EMPLOYEE" values (4,'Murray',3,'M',24,'APAC',55650000);
Insert into "EMPLOYEE" values (5,'Sampras',3,'M',44,'AMER',5660000);
Insert into "EMPLOYEE" values (6,'Agassi',3,'M',45,'AMER',5056000);
Insert into "EMPLOYEE" values (7,'Venus',3,'F',28,'AMER',9500500);
Insert into "EMPLOYEE" values (8,'Serena',3,'F',29,'AMER',9507000);


/*Employee Type = 4: Football Players */

Insert into "EMPLOYEE" values (1,'Messi',4,'M',24,'APAC',510000);
Insert into "EMPLOYEE" values (2,'Ronaldo',4,'M',28,'AMER',500);
Insert into "EMPLOYEE" values (3,'Xavi',4,'M',30,'EMEA',5002300);
Insert into "EMPLOYEE" values (4,'Beckham',4,'M',40,'EMEA',7850000);


Now we have the data in the “EMPLOYEE” table, which holds data for 3 regions and 4 types of employees.


C) Our Scenario:


1) Below are the conditions on the output column list, based on the “Employee Type” selected by the user:


a) If “Employee Type” = 1 then the user should be able to see EMP NO, EMPLOYEE NAME, GENDER, AGE, REGION, SALARY
b) If “Employee Type” = 2 then the user should be able to see EMP NO, EMPLOYEE NAME, AGE, REGION, SALARY

c) If “Employee Type” = 3 then the user should be able to see EMP NO, EMPLOYEE NAME, GENDER, REGION, SALARY
d) If “Employee Type” = 4 then the user should be able to see EMP NO, EMPLOYEE NAME, GENDER, AGE, REGION


2) We will also give the user the option of selecting the list of regions of their choice while analyzing the data:


a) AMER

b) APAC

c) EMEA


Let us create a procedure named EMPLOYEE_DETAILS and see how we can achieve this:


/* =====================================================================================================
   Description : Procedure for explaining dynamic SQL using EXECUTE IMMEDIATE and the REPLACE function
   ===================================================================================================== */

CREATE PROCEDURE EMPLOYEE_DETAILS
(
    EMPLOYEE_TYPE VARCHAR(5),
    REGION        VARCHAR(10)
)
LANGUAGE SQLSCRIPT AS

BEGIN

DECLARE VAR_REGION    VARCHAR(10000);
DECLARE SQL_STR       VARCHAR(3000);
DECLARE VAR_EMPTYPE   INTEGER;
DECLARE REGION_FILTER VARCHAR(10000);

DECLARE SQLERRORS CONDITION FOR SQL_ERROR_CODE 10001;

/* Exception handler that logs the SQL query which resulted in the error,
   so the failing query appears in the output message itself */

DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
    SQL_STR := 'SQL Error Exception. The Query Executed is: ' || SQL_STR
            || ' The Error Message is: ' || ::SQL_ERROR_MESSAGE;
    SIGNAL SQLERRORS SET MESSAGE_TEXT = SQL_STR;
END;

VAR_EMPTYPE := EMPLOYEE_TYPE;
VAR_REGION  := REGION;

/* If no region values are passed, the filter would be NULL,
   so default to an always-true WHERE clause */
REGION_FILTER := ' WHERE 1=1';

/* Forming the region filter condition using the REPLACE function */

IF (REGION <> '%') THEN

    SELECT '''' || REPLACE(:VAR_REGION, ',', ''',''') || ''''
    INTO REGION_FILTER FROM DUMMY;

    REGION_FILTER := ' WHERE REGION IN (' || REGION_FILTER || ')';

END IF;

/* ----- Forming the employee type condition ----- */

IF (VAR_EMPTYPE = 1) THEN

    /* Dynamically form the SELECT statement... */
    SQL_STR := 'SELECT "EMP NO", "EMPLOYEE NAME", "GENDER", "AGE", "REGION", "SALARY" FROM "EMPLOYEE" '
            || REGION_FILTER || ' AND "EMPLOYEE TYPE" = 1';

    /* ...and execute the string using EXECUTE IMMEDIATE */
    EXECUTE IMMEDIATE (:SQL_STR);

ELSEIF (VAR_EMPTYPE = 2) THEN

    SQL_STR := 'SELECT "EMP NO", "EMPLOYEE NAME", "AGE", "REGION", "SALARY" FROM "EMPLOYEE" '
            || REGION_FILTER || ' AND "EMPLOYEE TYPE" = 2';

    EXECUTE IMMEDIATE (:SQL_STR);

ELSEIF (VAR_EMPTYPE = 3) THEN

    SQL_STR := 'SELECT "EMP NO", "EMPLOYEE NAME", "GENDER", "REGION", "SALARY" FROM "EMPLOYEE" '
            || REGION_FILTER || ' AND "EMPLOYEE TYPE" = 3';

    EXECUTE IMMEDIATE (:SQL_STR);

ELSE

    SQL_STR := 'SELECT "EMP NO", "EMPLOYEE NAME", "GENDER", "AGE", "REGION" FROM "EMPLOYEE" '
            || REGION_FILTER || ' AND "EMPLOYEE TYPE" = 4';

    EXECUTE IMMEDIATE (:SQL_STR);

END IF;

END;


 

D) Checking the output:


Now let us check whether the output works as desired.


Case 1: The user wants the details for Employee Type = 1, i.e. cricketers, in regions AMER and APAC.
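With the procedure created above, this case corresponds to a call along these lines (the region list is passed as one comma-separated string):

CALL EMPLOYEE_DETAILS('1', 'AMER,APAC');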


Screen Shot 2013-12-17 at 4.26.05 PM.png


As shown above, only AMER and APAC data is shown, and only the desired column list for Employee Type = 1, i.e. EMP NO, EMPLOYEE NAME, GENDER, AGE, REGION, SALARY.


Case 2: The user wants the details for Employee Type = 4, i.e. football players, in region EMEA.


Screen Shot 2013-12-17 at 4.26.51 PM.png


As shown above, only EMEA data is shown, and only the desired column list for Employee Type = 4, i.e. EMP NO, EMPLOYEE NAME, GENDER, AGE, REGION.


Hope you have enjoyed the blog. Let me know your thoughts on the same.


Yours,

Krishna Tangudu

 

 





 

 

FAQ: What is SAP River?


If my memory serves me correctly, SAP River was first mentioned after the acquisition of Frictionless E-Sourcing - the acquisition came with a rapid application development platform, which was subsequently jettisoned. We then saw the SAP River product back in 2010, when it was briefly showcased as a development concept.

 

Fast-forward 3 years and we now have the first - Early Adopter - release of the SAP River software. I've been looking at it over the weekend - and thought I'd give some first impressions, and start up the FAQ.

 

What is SAP River?

 

SAP River is a development platform, based on the SAP HANA Platform, that allows you to express intent in order to build entire business applications.

 

In short, you express what you want to build, and how you intend for it to be built, using the River Development Language, and River builds everything for you. You describe entities, relationships, views and actions, and SAP River builds things like database tables, entity relationships, stored procedures, views and OData services. All you have to build on top of this is a UI layer which can consume OData - for example, using SAP's SAPUI5 development framework, which is also included in SAP HANA.

 

What is SAP River not?

 

SAP River is not a platform in itself (rather, it relies upon SAP HANA for this) and it is not a UI. You can build the UI in anything that can consume OData, though SAPUI5/OpenUI5 and the SAP HANA UI Integration Services is a logical choice of UI.


What are the use cases for SAP River?

 

In my opinion, it is well suited to a few things:

 

- Standalone Apps built by app vendors and resold as apps or templates

- Extendible HANA Apps built by SAP

- Extending existing Apps like SAP HANA Live

- Small-Medium sized ERP Apps

- Big Data Apps

 

What is the River Development Language?

 

River Development Language, or RDL, is the language for expressing entities, views, associations and actions in SAP River. I found a document here that describes this in more detail, if you're interested.

 

Will RDL or SQLScript become the go-to language for SAP HANA Development?

 

RDL and SQLScript are complementary and serve different purposes.

 

SQLScript is HANA's stored procedure language - fantastic for manipulating large volumes of data under the cover and returning a result set. It can be used in RDL, in SQL, via ODBC, JDBC, OData or MDX and native HANA development. Most HANA projects will use SQLScript somewhere.

 

RDL is a development language for expressing whole business scenarios, and it is one option for HANA Development. Certainly, not all HANA projects will use RDL, but only those that decide to use it as the main language for their scenario. RDL will not replace SQLScript, though it is possible to write SQLScript in-line with RDL.

 

So SAP River is in-Memory?

 

Yes, SAP River is based on SAP HANA so everything you build will be in-Memory. It is blazingly fast even with large data volumes and can of course use data replicated out of another system like SAP ERP, or any other database for that matter.

 

What can SAP River do that HANA can't do?

 

SAP River is built on the HANA Platform, so the short answer is: from a capability perspective, nothing.

 

However what River allows is very concise expression of development artifacts and a data-driven developer experience. Automatic documentation, for example. River isn't intended to be the solution to every problem, but rather a solution to the problem of wanting to build apps easily and quickly.

 

And this is what River can do: facilitate the building of enterprise-grade applications in hours, not weeks, with easy and fast changes. What's more, River allows quite sophisticated business logic to be built very easily, whereas HANA alone requires you to build your own stored procedures or server-side JavaScript.

 

What is the downside to SAP River?

 

In my opinion, SAP River is another weapon in the SAP HANA arsenal which you can use as you see fit. Ian Bradshaw told me he felt that River was just another development platform to learn, but I think it will appeal to a certain type of developer and use case, so I'm not sure I agree this is a problem.

 

Certainly, I think there are apps which will be better suited to custom SAP HANA Development, or using SAP Business Suite or BW on HANA. River doesn't preclude you from using those - even on the same data models - but it provides you a neat option.

 

Is SAP River Available now?

 

Sort of. It's available with SAP HANA SP07 via an Early Adopters Program - email earlyAdoption@sap-river.com for more details. This is designed for people who would like to take an application live.

 

You need to do some installation with SAP HANA SP07 to get it to work, and the SAP HANA Academy videos below are very helpful. With SAP HANA SP08 it will likely be included in the main installation.

 

What does SAP River run on?

 

SAP River will run on any SAP HANA SP07 appliance - on premise, or in the cloud. It requires > Rev.70 of SAP HANA so right now it's only available on premise, or with the developer edition on sap-river.com. It requires a delivery unit to be added to your SAP HANA appliance, some SAP HANA Studio plug-ins (which work on the Mac!) and a little configuration. Then, you're all good to go.

 

How is SAP River licensed?

 

SAP River is included in the HANA Enterprise license. This presumably includes HANA One on AWS, but as usual contact your account rep for more details.


Where do I go to find more information about SAP River?

 

SAP have done an excellent job of making River relatively available on day 1. I'm very impressed. Here are my favorite resources:

 

- SAP HANA Academy - Philip Mugglestone gets you up to speed with River in around 1 hour of videos

- SAP-River.com - Get your own cloud development instance in 5 minutes or less

- SAP Help - there is a collection of reference documents to get you up and running

- Introducing SAP River - a document on SCN

 

Final Words

 

I've yet to build much in SAP River, but I found it exceptionally easy to get started. I'm thinking of building a small app over the holiday period, and I'll share the experience. Does anyone have any suggestions for what to build?

 

Any questions you'd like to know that I missed?

Using Text Joins to enrich Attributes with "Texts" in SAP HANA with SAP BO Analysis Office


Hello Folks,


In this blog, I will walk you through how to enrich attributes with texts in SAP HANA, using SAP BO Analysis Office as the reporting tool.


Problem Description:


We want to enable the description for the “Country Key” column of the KNA1 table using the T005T table via a “Text Join”, and use the “Label Column” setting to display “Text and Key” in SAP BO Analysis Office.


 

As shown below, we can get the country for each customer number, but we cannot get the description of the country:


Screen Shot 2013-12-26 at 11.07.39 AM.png


 

To enable the country description, let us create an attribute view named “AT_TEXT” as shown below:


Joining Conditions:


KNA1.MANDT = T005T.MANDT

KNA1.LAND1 = T005T.LAND1


Language column for Text Join: SPRAS
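Conceptually, the text join behaves like the SQL below (just a sketch: 'E' stands in for the logon language, and T005T's LANDX is assumed as the description column):

SELECT K."KUNNR", K."LAND1", T."LANDX" AS COUNTRY_DESC
FROM "KNA1" K
LEFT OUTER JOIN "T005T" T
  ON  T."MANDT" = K."MANDT"
  AND T."LAND1" = K."LAND1"
  AND T."SPRAS" = 'E';  -- the language column restricts the join to one language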


Screen Shot 2013-12-26 at 11.08.14 AM.png


Screen Shot 2013-12-26 at 11.09.04 AM.png


 

That completes our attribute view; let us now preview the data.


Output:


Screen Shot 2013-12-26 at 11.09.46 AM.png


 

Now we join this attribute view to the fact table in an analytic view, AN_TEXT, and enable the country description columns:


Screen Shot 2013-12-26 at 11.11.04 AM.png

 

Data Preview:


Screen Shot 2013-12-26 at 11.11.48 AM.png

 


 

Connecting to Analysis Office:


1) Selecting the data source to login to the relevant SAP HANA database as shown below:


Screen Shot 2013-12-26 at 11.12.19 AM.png


 

2) Choose the data source and login as shown below:


Screen Shot 2013-12-26 at 11.12.54 AM.png


Screen Shot 2013-12-26 at 11.13.24 AM.png

Select AN_TEXT


Output in AO:


Screen Shot 2013-12-26 at 8.29.27 PM.png

 

Note: For calculation views, we have to maintain the texts in the “Description Mapping” column, as shown below:

Screen Shot 2013-12-26 at 11.14.11 AM.png


Thanks for taking the time to read this blog. Do let me know your feedback.


Yours,

Krishna Tangudu

 

 

 

 

 

 

Applying YTD in SAP HANA with SAP BO Analysis Office



Hello Folks,


This blog will help you understand how to address YTD (Year to Date), a common scenario in SAP BEx reports.


Problem Description:


To apply Year to Date filtering, a common scenario in SAP BEx reports.


YTD: Year to Date – the report should show the values from the first day of the year up to the date provided by the user.


We need to define an input parameter to capture the date up to which the report should be shown, as done below:


Screen Shot 2013-12-26 at 1.13.08 PM.png


 

Now we need to create a calculated column to define the “FromDay” as shown below:


Screen Shot 2013-12-26 at 1.13.56 PM.png


 

Graphical calculation views offer an “Expression” capability in projection and aggregation nodes.


 

We need to define the filter expression as shown below:


Screen Shot 2013-12-26 at 1.14.38 PM.png
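As a sketch (the input parameter name In_Date and the date column CALDAY are assumptions for illustration, and the expression syntax is the calculation view expression language shown earlier), the "FromDay" calculated column and the filter could look like this:

/* Calculated column "FromDay": first day of the year of the input date */
date(leftstr('$$In_Date$$', 4) + '0101')

/* Filter expression: keep rows from "FromDay" up to the user-supplied date */
"CALDAY" >= "FromDay" and "CALDAY" <= date('$$In_Date$$')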


 

Connecting to Analysis Office:


 

1) Selecting the data source to login to the relevant SAP HANA database as shown below:

 


Screen Shot 2013-12-26 at 1.15.53 PM.png


 

2) Choose the data source and login as shown below:


Screen Shot 2013-12-26 at 1.16.31 PM.png


 

Output in AO:

 


Screen Shot 2013-12-26 at 1.17.03 PM.png


If you look carefully at the output, I have applied a cross tab to get the output the way we would get it in BEx using a customer exit.

We can also apply other KPI scenarios, such as a Previous Year vs. Current Year comparison.


Hope this blog is helpful to you. Do share your feedback on the same.


Hope you understood how to apply YTD when using SAP HANA with SAP BO Analysis Office as the reporting tool.


Yours,

Krishna Tangudu

 



 

 

 

 

 

SAP HANA: Using Commit & Rollback in exception handling block of a Stored Procedure



Hello Folks,

 

In this blog, I would like to share my experiences using COMMIT and ROLLBACK in the exception handling block of a stored procedure in SAP HANA.


First, I want to thank my colleague Nelson, who helped me find this workaround and guided me in writing this blog.

 

 

Problem description:


If we use a procedure to load data into a table and the procedure encounters an error while loading, it may leave faulty data behind when we reload the same table.


 

To ensure consistency of the data, we need to roll back the in-flight transaction and not commit the data to the table unless the procedure completes successfully.


 

In the example below, I will explain how to use COMMIT and ROLLBACK in an exception handling block.


Let us take an example procedure that inserts data into a table, as shown below:


 

/* Creating a test table with column EMPNO as primary key for our example */

CREATE TABLE EMPLOYEE_TEST (EMPNO INTEGER PRIMARY KEY);


/* Creating a Procedure for testing Predefined Exception Handling */

CREATE PROCEDURE EXCEPTION_HANDLING_TEST AS

BEGIN


/*Pre Defined exception handler */

DECLARE EXIT HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE AS "Error Code", ::SQL_ERROR_MESSAGE AS "Error Message" FROM DUMMY; -- To display error code and message


INSERT INTO EMPLOYEE_TEST VALUES (1);

INSERT INTO EMPLOYEE_TEST VALUES (1); -- Again inserting 1 will result in unique violation error


END;


/*Calling the procedure*/

CALL EXCEPTION_HANDLING_TEST;



/*Output which shows the predefined error code along with the error message as shown below*/


Screen Shot 2013-12-26 at 1.46.34 PM.png



 

We are able to handle the exception and show the error message, but the first insertion still completes and the table holds the data, which is not desirable, as shown below:


Screen Shot 2013-12-26 at 1.37.24 PM.png


 

So we will use ROLLBACK to undo the insertions if the procedure encounters an error while loading, as shown below:


 

/* Creating a Procedure for testing Predefined Exception Handling */


CREATE PROCEDURE EXCEPTION_HANDLING_TEST AS
BEGIN


DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;


INSERT INTO EMPLOYEE_TEST VALUES (1);

INSERT INTO EMPLOYEE_TEST VALUES (1); -- Again inserting 1 will result in unique violation error


COMMIT;


END;


 

We get a compilation error, “Feature not supported”, when trying to use ROLLBACK in the exception handler block or COMMIT in the procedure, as shown below:


Screen Shot 2013-12-26 at 1.44.48 PM.png


 

As a workaround, let us put “ROLLBACK” and “COMMIT” into variables, execute them dynamically, and see whether it works, as shown below:


/* Creating a Procedure for testing Predefined Exception Handling */


CREATE PROCEDURE EXCEPTION_HANDLING_TEST AS
BEGIN


DECLARE  var_commit  VARCHAR(100) := 'COMMIT';

DECLARE var_rollback VARCHAR(100) := 'ROLLBACK' ;


DECLARE EXIT HANDLER FOR SQLEXCEPTION


BEGIN


EXEC (:var_rollback);

SELECT ::SQL_ERROR_CODE AS "Error Code", ::SQL_ERROR_MESSAGE AS "Error Message" FROM DUMMY;


END;


INSERT INTO EMPLOYEE_TEST VALUES (1);

INSERT INTO EMPLOYEE_TEST VALUES (1); -- Again inserting 1 will result in unique violation error


EXEC (:var_commit);


END;


Screen Shot 2013-12-26 at 2.02.54 PM 1.png


 

The workaround worked: we are able to compile the procedure without any errors.

Now let us call the procedure and check whether the inserts are rolled back or a record still gets inserted.


 

CALL EXCEPTION_HANDLING_TEST;


Screen Shot 2013-12-26 at 2.04.12 PM.png


SELECT * FROM EMPLOYEE_TEST;


Screen Shot 2013-12-26 at 2.04.45 PM.png


 

Thus we are able to achieve the intended result.


I hope this blog has helped you understand how to use COMMIT and ROLLBACK in an exception handling block in SAP HANA.


Please let me know your feedback on this.

 

 

 



 

 

 

 

Yours,

Krishna Tangudu


 

 

 

 

 

Connect ERP to HANA server for COPA


Introduction:

 

The SAP ERP Rapid-deployment solution for profitability analysis with SAP HANA provides functionality for customers to speed up reporting and transactional processes in CO-PA by using the SAP HANA database as a secondary data persistence.

 

 

Prerequisites

 

  • It is recommended to use SAP kernel/DBSL 7.20

 

  • Only an SAP kernel 64-bit is supported. See SAP Note 1700052 if you use a non-Unicode system

 

  • Download HANA Client and HANA studio

 

1.       Install HANA client on the ERP server

  (You must install the HANA Client on all application servers – see Notes 1517236 and 1597627.)

  After the installation you will see the folder - C:\Program Files\SAP\hdbclient

2.       Enter the path for the DB client software <lw>:\usr\sap\<SID>\hdbclient in the environment variable PATH for the user <SID>Adm

1.png

Picture 1.0

3.       Restart the server.

4.       Go to transaction code SM30, table DBCON, and create new entries:

2.png

Picture 1.1

  • DB connection – HANA SID
  • DBMS – HDB
  • User name – the user name you are creating in the HANA Studio (see next page)
  • DB password – password of the user name
  • Conn. Info – HANA server FQDN with port 30015

3.PNG

Picture 1.2

DBACockpit – Test the connection

When you create the connection in table DBCON, you can see it in the DBA Cockpit under DB Connections.

4.PNG

Picture 1.3

To test the connection you need to add the system to System configuration

5.png

Picture 1.3.1

Choose Add

6.png

Picture 1.3.2

On the next screen, enter these parameters:

7.png

Picture 1.3.3

  • System – HANA SID
  • Mark the database connection, and choose the name of the connection from the DBcon table

8.PNG

Picture 1.3.4

Now you can test the connection:

9.PNG

          Picture 1.3.5

10.PNG

    Picture 1.3.6

    And check the result

11.PNG

     Picture 1.3.7

Test the connection with ADBC_TEST_CONNECTION

    Go to transaction code SA38, enter ADBC_TEST_CONNECTION, and execute.

                           Select the name of the connection

12.PNG

Picture 2.0

Execute and check the result

13.PNG

Picture 2.1

SAP sources:

Note 1597627 - SAP HANA connection

Note 1559994 - Profitability analysis with Accelerated CO-PA

Note 1632506 - SAP ERP RDS for profitability analysis with SAP HANA

 

Thanks

Naor Shalom


Connect HANA to SLD and Solution Manager


Connect HANA to SLD and Solution Manager

 

First Step: Register on SLD

 

To connect HANA to Solution Manager, you first have to register HANA in the SLD, using the lifecycle manager.

 

 

14.png

Picture 1.0

 

Open HANA studio and go to the lifecycle manager.

 

15.png

 

Picture 1.1

 

  • SLD hostname
  • SLD port
  • SLD username
  • SLD password
  • HANA SP06

 

HANA SID on LMDB

 

16.PNG

Picture 1.2

 

In picture 1.2 you can see the HANA system registered in the LMDB.

17.PNG

Picture 1.3

 

 

18.PNG

Picture 1.4

 

System configuration on Solution Manager

 

Go to the Solution Manager work center - Solution Manager Configuration - Manage System Configuration

Select the HANA SID and choose Configure System

19.PNG

Picture 2.0

 

Assign Product to the system

20.PNG

Picture 2.1

 

In this step, you assign the diagnostics agents to the managed systems. You assign a diagnostics agent to each server (virtual host) on which the managed system is running

 

21.PNG

Picture 2.2

 

HostAgent check success for host "HANA server"

 

22.PNG

Picture 2.3

 

In this step, you specify all system parameters required to configure the managed system

 

SQL port  - Port number of HANA instance

User name  - HANA DB user - Should be "System user"

 

 

23.PNG

Picture 2.4

 

In this step, you verify or add landscape parameters.

 

Save the landscape objects

 

24.PNG

Picture 2.5

 

In this step, SAP Solution Manager performs automatic configuration activities.

 

  • DATABASE Extractor Setup
  • SAPRouter Setup
  • Extractor Setup
  • Introscope Host Adapter
  • Alerting Setup for Early Watch Alert

26.PNG

 

Picture 2.6

 

In this step you perform manual configuration activities.

 

25.PNG

Picture 2.7

 

Logical component assignment not required for this system type

 

27.PNG

Picture 2.8

    

System Landscape - Solution Manager SMSY

 

Errors :

 

Symptom:

 

A CCDB extractor returns the error message "CCDB: A fatal error occurred for a store". Another symptom is that the CCDB Administration reports errors in the 'Fatal Errors' column for a managed system.

 

Solution:

 

The detailed error message can be found in the application log of the Solution Manager. Call transaction SLG1 and set Object = 'CCDB', Ext. Identifier = '*<SID>*' and From Date = '<Yesterday>'. Search for red entries containing the text 'Fatal Error...'. Once you have the detailed error message, search for notes on component SV-SMG-DIA*.

 

Symptom:

 

SLG1

----Fatal Error: CX_PERSIST_STORE_UPL_DATE_1: The upload date '16.09.2013 14:52:56 UTC'  is in the future (Check rang

 

Solution:

 

Note 1891416 - CCDB CTC and DB Extractor - upload date is in the future

Note 1731373 - CCDB: Problems and solutions

 

SAP SOURCES:

 

Note 1747682 - SolMan 7.1: Managed System Setup for HANA

Note 1625203 - saphostagent/sapdbctrl for NewDB

Note 1891416 - CCDB CTC and DB Extractor - upload date is in the future

Note 1731373 - CCDB: Problems and solutions

 

Thanks

Naor Shalom

 

6 Golden Rules for New SAP HANA Developers


I've been developing code in SAP HANA for nearly 3 years now, and at the end of 2013, there is now a huge influx of developers. This is good, because it means that mass adoption is here. But it also means that we see a lot of the same mistakes being made. I did some analysis of the questions asked in HANA development forums and thought I'd pick out the most common mistakes that newbie HANA developers make and give my advice.

 

The main thing I've noted is that every developer comes to SAP HANA with a set of misconceptions about HANA and a body of experience with some other technology. HANA is a little different to other application platforms and you need to change your thinking. I hope this helps you in your journey.

 

Never use row-based tables

 

If you are an Oracle, IBM, or Microsoft person then you are institutionalized into thinking that you put OLTP data in the row-store and OLAP in the column-store. This is not true with SAP HANA! You must create all tables in the column store, which can be used for both transactional and analytic scenarios.

 

HANA does have a row store and you can create row-oriented tables for some very specific scenarios:

 

- Transient data like queues, where you insert and delete a lot and the data never persists

- Configuration tables which are never joined, where you select individual entire rows

- When you are advised to by SAP support

 

But in general, never use row-oriented tables, especially for OLTP/transactional scenarios. HANA is optimized to use the column store for combined transactional and analytical scenarios.

 

Never create indexes

 

Again if you come from the traditional RDBMS space, you will see that HANA allows the CREATE INDEX command.

 

However, when you create a table in HANA (and I'm simplifying), it in fact creates a set of sorted, compressed and linked indexes. As a result, secondary indexes almost never improve performance. In fact, I've never come across a scenario where an index improved the performance of an analytic query, where a large volume of data is aggregated.

 

There is one scenario when a secondary index can improve performance: when you have a query which selects a very small amount of data from a very large table (or group of joined tables). In this instance, creating a secondary index on all your sort columns can allow HANA to find the data faster. But this is a very specific situation - the simple advice is, never create indexes.
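If you do find yourself in that narrow scenario, the statement itself is standard SQL; measure before and after, and drop the index if it doesn't pay off (the table and column names here are illustrative):

/* Only for highly selective lookups on a very large table */
CREATE INDEX "IDX_ORDERS_CUST" ON "ORDERS" ("CUSTOMER_ID", "ORDER_DATE");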

 

Don't use the SAP HANA Modeler Perspective

 

HANA has 3 developer perspectives: the SAP HANA Systems view, the Modeler, and the Developer perspective. Take the time to read the developer guide and set up the Developer perspective. It lets you keep all your development artifacts together, including tables, information views and stored procedures, plus OData and HTML artifacts if you need them. You get change management, version management, the ability to test inactive objects, code completion, and a bunch of other things.


Please don't create models directly in the SAP HANA Content Repository any more.

 

Don't use SQLScript unless you have to

 

SAP HANA provides a powerful stored procedure language, but its power is a bad thing for new SAP HANA developers: it allows you to write very inefficient code which doesn't parallelize.

 

Most of the scenarios I see developers coding on the forums could be done better with Information Views like Attribute Views, Analytic Views and Calculation Views. And in SAP HANA SP07, Information Views are faster than SQLScript in almost every scenario. Plus, Information Views are easier for others to understand, remodel and change.

 

There are scenarios where you need SQLScript, but it shouldn't be viewed as a general-purpose solution to modeling problems.

 

If you have to use SQLScript, don't use Cursors or Dynamic SQL

 

If you are a PL/SQL or T-SQL developer then you will be familiar with Cursors and Dynamic SQL. I see a lot of questions in the forums related to performance problems with these. Avoid using them at all costs - this is detailed in the SQLScript Reference Guide.

 

There are a number of things that push you out of multi-threaded mode in SQLScript: Local Scalar Variables, Loops, Cursors and Dynamic SQL. All of these constructs will cause you performance problems if you use them, so you need to avoid them.

 

Especially don't use Dynamic SQL to generate column names for INSERT or SELECT statements! Instead, create an Information View for SELECT statements, or write a Python-based loader to load tables.

 

In many cases you can change a loop into a single SELECT or INSERT statement. In other cases, you can nest SQLScript procedures to improve performance. I'm thinking that this area needs a blog of its own.
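For example, a cursor loop that inserts row by row can usually be collapsed into one set-based statement that HANA can parallelize (the table names are illustrative):

/* Instead of a cursor over SALES with one INSERT per row... */
INSERT INTO "SALES_SUMMARY" ("REGION", "TOTAL_AMOUNT")
SELECT "REGION", SUM("AMOUNT") FROM "SALES" GROUP BY "REGION";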

 

Avoid JOINs on Large Data Volumes

 

Joins on very large tables (>100m rows) can be inefficient. If you have two large fact tables then never join them - performance will always be a problem.

 

Instead, you can normalize the data in Analytic Views and create a Union with Constant Values in a Calculation View. Werner Steyn describes this nicely in his Advanced Data Modeling guide. You can expect a very large performance increase for complex queries by using this mechanism.

 

Final Words

 

This isn't designed to be an exhaustive guide, but rather a compilation of the common mistakes that I see developers making when they're starting out with HANA. If you're new to HANA then please take the time to read them and think about what it means to your developer methodology.

 

Have I missed any obvious ones?

Using Multiple Values in Input parameter for filtering in Graphical Calculation View


Hi Folks,

 

In the previous blog, SAP HANA: Handling Dynamic Select Column List and Multiple values in input parameter, I showed how to select multiple values for filtering using the REPLACE function in a procedure.

 

In that blog, we took the procedure approach, since the user needed dynamic output based on input conditions.

 

But if the output is a static column list, you wouldn't want to use a procedure. So in this blog I will explain how to achieve the same multiple-value filter handling in a graphical calculation view, using a "Projection".

 

Problem Description:

 

Give the user the flexibility to choose the values they wish to see in the report, and push the filtering logic down to the lowest level possible.

 

Case 1: To select single value or multiple values for filter based on the input from the user

Case 2: To select "All" values if the user doesn't want to apply any filter

 

**Note: If using "HTML5 Dashboards", you can present the dropdown for the filter as shown below:

Screen Shot 2013-12-30 at 10.18.25 PM.png

 

This approach is very useful if your dropdown has many values, say more than 100 selections.

 

Now, Let us create an analytic view for our testing as shown below:

 

Screen Shot 2013-12-30 at 9.18.06 PM.png

 

Now, after adding the required fields to the output, let us create an input parameter to hold the user's "Region" selection, as shown below:

 

Screen Shot 2013-12-30 at 9.40.30 PM.png

 

Now Let us create the filter using "Expression" as shown below:

 

in("REGION",'$$In_Region$$') or match ("REGION",'*$$In_Region$$*')

Screen Shot 2013-12-30 at 9.44.42 PM.png

 

Data Preview:

 

Case 1 : ( Single or Multiple Values) :

 

(a) Single Value:


Screen Shot 2013-12-30 at 9.58.31 PM.png

Sql Statement:


SELECT "REGION","EMP_NO", "EMPLOYEE_NAME", "EMPLOYEE_TYPE", "GENDER", "AGE", sum("SALARY") AS "SALARY" FROM "_SYS_BIC"."projects/CV_EMPLOYEE" ('PLACEHOLDER' = ('$$In_Region$$', 'AMER')) GROUP BY "EMP_NO", "EMPLOYEE_NAME", "EMPLOYEE_TYPE", "GENDER", "AGE", "REGION"

 

Screen Shot 2013-12-30 at 9.57.28 PM.png

In the above screenshot we can see one value, AMER, in the output.

 

(b) Multiple Value:

 

You have to input the value like this: AMER'',''APAC, as shown below:

Screen Shot 2013-12-30 at 10.00.25 PM.png

Sql Statement:

 

SELECT "REGION","EMP_NO", "EMPLOYEE_NAME", "EMPLOYEE_TYPE", "GENDER", "AGE", sum("SALARY") AS "SALARY" FROM "_SYS_BIC"."projects/CV_EMPLOYEE" ('PLACEHOLDER' = ('$$In_Region$$', 'AMER'',''APAC')) GROUP BY "EMP_NO", "EMPLOYEE_NAME", "EMPLOYEE_TYPE", "GENDER", "AGE", "REGION"


Screen Shot 2013-12-30 at 10.02.25 PM.png

 

In the above screenshot we can see two values, AMER and APAC, in the output.

 

Case 2 : ( All Values -- No Filtering) :


We need to pass * as shown below :

Screen Shot 2013-12-30 at 10.08.45 PM.png

Sql Statement:

 

SELECT "REGION","EMP_NO", "EMPLOYEE_NAME", "EMPLOYEE_TYPE", "GENDER", "AGE", sum("SALARY") AS "SALARY" FROM "_SYS_BIC"."projects/CV_EMPLOYEE" ('PLACEHOLDER' = ('$$In_Region$$', '*')) GROUP BY "EMP_NO", "EMPLOYEE_NAME", "EMPLOYEE_TYPE", "GENDER", "AGE", "REGION"

 

Screen Shot 2013-12-30 at 10.11.29 PM.png

 

In the above screenshot we can see all three regions in the output.

 

**Note: This approach is useful when your reporting solution is HTML5 dashboards and you cannot use Variables (multiple values) for filtering.

 

I hope this blog is helpful for you; do let me know your feedback.

 

Yours,

Krishna Tangudu


Building an SCN Influencer Analysis App using SAP River, HANA and Lumira


Over on SAPHANA.com I posted a blog (it may not be live yet, bear with me!) about measuring Influencer Analysis using SAP River, HANA and Lumira. The other blog deals mostly with the analysis, whilst this blog is about the making of the app.

 

Where did the idea come from?

 

After SAP River was released, I came to think about potential use cases and I really wanted to build an app that's a bit more than the standard "movie casting" app that is in the developer notes. To do this, I needed an interesting data source and I was reminded of the beta SCN API which was created by Matthias Steiner and the SCN team. The SCN API is in beta for testing and legal reasons, so I can't reveal the means to access it. But, it is largely based on the Jive REST API.

 

I figured that I could use the code that I wrote a few days ago to integrate Python into River to inject data into SAP River. I thought I'd then start to use the power of the HANA platform by integrating HANA Text Analysis for Sentiment Analysis and then expose it using SAP Lumira. And then, to make it interesting, I gave myself one day to write the app and one day to create a story, document and blog about it.

 

The whole point of SAP River is that it's supposed to be easy to use and fast to develop, so this should be possible, right?

 

Building the River app

 

In my process this was a bit iterative, as I poked at the SCN API to find the data that I wanted, but here's my RDL code. It's pretty simplistic and it describes SCN Spaces, Content and Authors. I decided to put Blogs and Documents as Content, so I could easily aggregate based on both. I defined contentType as an enumerated type, and so when I insert them later, I specify which type of content I'm inserting into HANA.

Screen Shot 2013-12-31 at 1.40.17 PM.png

What's fantastic about RDL is that it then generates the HANA tables, views, entity relationships and OData services for you. Done. Now we can get on with loading data. Here's a sample table:

 

Screen Shot 2013-12-31 at 1.57.39 PM.png

 

Loading data into HANA

 

This is pretty easy. I used Python as my language of choice, and Sublime Text for editing - thanks DJ Adams and Brenton O'Callaghan for the advice there. Here's my code to load into HANA. I'm sure there are better ways of doing this; I'm a hacker, not a programmer.

Screen Shot 2013-12-31 at 1.47.57 PM.png

There are a few gotchas:

 

- The SAP River UTCTimestamp uses the OData format and requires dates in "milliseconds since the epoch" which is very frustrating. That's the reason for the weird time conversion code. Blame Microsoft for this!

- You have to re-encode the SCN Content and other UTF-8 data in JSON, or it will fail, hence the json.dumps

- I do some funny work to turn the blog URL into an ID for later use

- These aren't my real hostname, username or password :-)

- I found for complex views (e.g. give me all the spaces I haven't downloaded yet from SCN), it can be necessary to create HANA views and manual xsodata services. Not a big deal.

 

Enabling Text Search and Sentiment Analysis

 

That's the best part - and this couldn't be easier. It's one command! Note that this uses the Voice of Customer configuration, which includes sentiment analysis as well as text extraction. You can define your own dictionaries if you want to, but I didn't do this.

Screen Shot 2013-12-31 at 1.56.51 PM.png
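For reference, that one command is a CREATE FULLTEXT INDEX with text analysis switched on. A sketch of its shape (the table and column names are assumptions; the index name VOICE is chosen so the generated table comes out as $TA_VOICE):

CREATE FULLTEXT INDEX "VOICE" ON "CONTENT" ("TEXT")
  CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'
  TEXT ANALYSIS ON;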

Now, this actually creates a new database table called $TA_VOICE. It contains 1m text terms for my 40k pieces of content and it looks like this:

Screen Shot 2013-12-31 at 2.02.03 PM.png

Yes, I filtered on "unambiguous profanity"

 

When the underlying table is updated, the text index is updated with it.

 

Building the HANA Model

 

Note that I can also build the HANA model inside the SAP HANA Developer Perspective, right inside my RDL project. It's advantageous to do this because I can keep all my developer artifacts in one place, and transport them together between systems.

Screen Shot 2013-12-31 at 2.07.28 PM.png

 

I did this the regular HANA way - an Attribute View to join the Time Dimension, and then an Analytic view for my Content. This allows me to quickly aggregate and view data based on date, author, content and space. It takes 100ms to materialize the whole 40k row table.

Screen Shot 2013-12-31 at 2.03.44 PM.png

Now because my Voice of Customer table is also a fact table, I need to create a Calculation View so I can have a single Information Model. I do it like this:

Screen Shot 2013-12-31 at 2.06.01 PM.png

I now have one Information Model that can tell us any question about SCN data that we choose to ask. Unfortunately for either API or privacy reasons there are a bunch of things that I've not been able to extract, like Company, Country information or Badges, as well as ratings. It's a shame but such is life.


Connecting to HANA with Lumira

 

Now we can connect right on into HANA with Lumira.

Screen Shot 2013-12-31 at 2.10.53 PM.png

Our Influencer Dataset is immediately available and we can see our attributes and measures:

Screen Shot 2013-12-31 at 2.11.18 PM.png

And here's a sample graphic - Top 20 Blog/Document writers over all of SCN for 2013, also ranked by number of likes and replies. Congratulations Tammy Powlas!

Screen Shot 2013-12-31 at 2.21.06 PM.png

Conclusions


I hope this makes interesting reading; it was certainly very interesting to build this. You can head over to SAPHANA.com if you want to see a more detailed influencer analysis - this is the "building of" blog. It's worth noting that I started this at 9am on Monday, and it's now 2.30pm on Tuesday: the River app is built, the data is troubleshot and loaded (data is always the hardest thing), text analysis is complete and the HANA models are designed. The SAP Lumira analysis has been completed and two blogs have been written describing the process.

 

This is what I hoped to achieve and this is the point of SAP River!

 

In 2014, the SAP HANA Application Platform is clearly really going to come of age, and the ability to quickly build transactional apps using SAP River and push Big Data into SAP HANA is a very powerful concept. In addition, the ability to then add text search and analysis, spatial, predictive and graph capabilities to these is very exciting.

 

A quick thanks to Matthias Steiner and his SCN API, everyone who engaged with me on Twitter last night who gave me ideas to make this blog better, the SAP HANA, River and Lumira teams, all of whom are working with me right now to make the products even better.

 

Have a very Happy New Year, and I look forward to working with you all in 2014.

What's new in SAP HANA SPS 7 - Monitoring


Introduction

 

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 7. To get the best overview of what’s new in SAP HANA SPS 7, see Ingo Brenckmann's blog on the SAP HANA community site. The delta presentation comparing changes since SPS 6 by SAP HANA Product Management can be found on the SAP HANA web site.

 

The topic of this post is monitoring, and it complements a number of tutorial videos posted to the SAP HANA Academy site and on YouTube.


 


 

 

What's New with SPS 7?

 

A "dazzling highlight", in the words of Lars Breddemann, are the new graphical editors: Memory Overview (shown below), Resource Allocation and Memory Allocation Statistics. The editors also caught the attention of John Appleby in his blog on monitoring.

memory.png

SAP HANA Studio - Memory Overview

 

 

Although you can add as many as 18 memory-related statistics to a table viewer in HANA studio, this is not the most intuitive display, as you can see below.

services.png

Memory statistics in SAP HANA Studio, Administration Console, Landscape, Services

 

Before SPS 7, the only way to get a graphical display of resource usage over time was the administration console's Performance - Load tab. This viewer has not changed much (if at all) since the BWA admin tool, which is still included, slightly modified, with SAP HANA (command HDB admin). You need good eyesight, good glasses, or a big screen to view it clearly, as color and line style cannot be modified.

 

admin.png load.png

Load graph from the SAP HANA database administration tool and SAP HANA studio.

 

Hence, the new web-based graphical editors Memory Overview, Resource Allocation and Memory Allocation Statistics are a welcome enhancement as they provide a much clearer and visually attractive display of monitoring data. They all use the SAP HANA XS built-in application server. You can view the new editors in action in this overview video on what's new with monitoring for SAP HANA SPS 7.

 

 

Monitoring SAP HANA: What's New in SPS 7

 

New Statistics Server

SPS 7 also introduces a new implementation for the statistics server. This statistics server is the component of the SAP HANA database that continuously collects data about system status, performance, and resource usage. It also issues alerts in the case of problems. As of SPS 7, it is possible to switch to a new mechanism whereby data collection and alerting are implemented through the execution of SQLScript procedures. This has two advantages.

  1. Improved performance - reduction of disk I/O and memory usage by replacing the OS process with internal procedure calls
  2. Increased configuration flexibility - the statisticsserver.ini properties file is replaced by tables in the _SYS_STATISTICS schema, which allows for ad hoc enabling/disabling and scheduling of collection (see the sketch below)
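As a sketch of that flexibility (the collector ID below is purely illustrative; inspect the table contents on your own revision first), collectors can be listed and disabled ad hoc in SQL:

-- List the collectors of the new statistics service
SELECT ID, STATUS FROM _SYS_STATISTICS.STATISTICS_SCHEDULE;

-- Disable a single collector ad hoc
UPDATE _SYS_STATISTICS.STATISTICS_SCHEDULE SET STATUS = 'Inactive' WHERE ID = 5022;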

 

The following video discusses the new statistics server and how to enable it, as it is not active by default.

 

 

Monitoring SAP HANA: New Statistics Server

 

Performance Analysis Guide

New in the documentation set of SAP HANA SPS 7 is the Performance Analysis Guide, which focusses on performance analysis and monitoring. This guide describes what you can do to identify and resolve performance issues and how to enhance the performance of the SAP HANA database in general. Regarding monitoring, the guide describes the different tabs in the SAP HANA studio administration console under performance, like threads, sessions, blocked transactions, job progress and load monitoring.

 

Other documentation on monitoring can be found in the Administration Guide and the Technical Operations Manual. The video below provides a short overview.

 

 

Monitoring SAP HANA: Documentation

 

Monitoring Tools Overview

 

In case you are new to the subject of SAP HANA monitoring - or are preparing yourself for the SAP Certified Technology Associate exam - you may find this overview video about the different tools you can use to monitor SAP HANA helpful.

 

 

Monitoring SAP HANA: Tools Overview

 

System Views

 

More advanced is the topic of system views. SAP HANA contains hundreds of monitoring views in the SYS and _SYS_STATISTICS schemas that you can use to write your own monitoring scripts in SQL. Getting started on this topic can be a bit of a challenge. However, there is a shortcut you can take. When you navigate through SAP HANA studio, going from tab to tab, enabling or disabling settings for the HANA database, what is actually sent over the line is just plain SQL. In the video below, we show you how to capture these statements using SQL trace and how they can be used for custom monitoring scripts.
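
To give a flavor of such a script, here is a simple hypothetical example against one of the public monitoring views:

-- Current memory usage per service, largest consumer first
-- (the view reports sizes in bytes; converted to GB here).
SELECT host, port, service_name,
       ROUND(total_memory_used_size / 1024 / 1024 / 1024, 2) AS used_gb
  FROM SYS.M_SERVICE_MEMORY
 ORDER BY total_memory_used_size DESC;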

 

The resulting SQL script is attached to this SAP HANA site document.

 

 

Monitoring SAP HANA: System Views

 

Monitoring Role and Security

 

Both the administration and the installation guide clearly document not to use the SYSTEM user in production systems.

 

caution.png

However, less clearly documented is how to set up a monitoring role that you can grant to database administrators. What about the built-in MONITORING role, you might say? That is a good start, it is true. It grants the CATALOG READ system privilege, so you can use the administration console perspective in SAP HANA studio, and it also grants SELECT on the _SYS_STATISTICS schema, so you can view data in the System Monitor and the different tabs of the Administration editor. However, with only the MONITORING role granted, you will get an error when trying to open the new web-based graphical editors. What privileges are needed to handle a disk-full alert? Restart a service? Clean up traces? This and more is explained in the video below.
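
As a rough sketch of the starting point, a custom role extending the built-in one might be created like this (role and user names are hypothetical; the privileges are the ones named above):

-- Custom monitoring role: the privileges behind the built-in MONITORING role.
CREATE ROLE "Z_MONITORING_EXT";
GRANT CATALOG READ TO "Z_MONITORING_EXT";
GRANT SELECT ON SCHEMA "_SYS_STATISTICS" TO "Z_MONITORING_EXT";
-- Grant the role to an administrator instead of working as SYSTEM.
GRANT "Z_MONITORING_EXT" TO "DBA_USER";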

 

 

Monitoring SAP HANA: Monitoring Role

 

Monitoring Checklist

 

The SAP HANA Administration Guide includes a helpful monitoring checklist. In the last video on monitoring SAP HANA, we show you how to check system availability, how to handle alerts, how to monitor memory, CPU, and disk resource usage, and much more.

 

As with the video on monitoring tools above, you might find this video tutorial helpful when preparing for the SAP Certified Technology Associate exam. You can find details on this certification in the SAP Education and Certification Shop.

 

 

Monitoring SAP HANA: Checklist

 

To be continued...

You can view more free online videos and hands-on use cases to help you answer the what, how and why questions about SAP HANA and Analytics on the SAP HANA Academy at academy.saphana.com or follow us on Twitter @saphanaacademy.


10 Predictions: What's new for SAP HANA in 2014?


I've been keeping a quiet eye on SAP HANA for a while now... and I thought I'd showcase a few predictions for SAP HANA in 2014. 2014 will, presumably, contain a further two releases of SAP HANA, SP08 and SP09, which will as usual bring a raft of new functionality and apps. Here are some of my favorites. None of this is official product strategy!

 

SAP River

 

SAP River is a new declarative development language based on SAP HANA. I've been using it, and it is a fabulous way to build next-generation business applications: it is fast, simple, and easy to use, and I can build apps in minutes rather than days.

 

It was released with HANA SP07 into an early adopter program and from what I can see, it is likely to be released formally with HANA SP08. We will see some new features, performance improvements and there will be a big focus on usability. If you are considering building apps in 2014 then I highly recommend looking at the River platform.

 

Lumira on HANA

 

SAP Lumira was born as Visual Intelligence in January of 2012, was renamed Lumira in 2013, and was completely rewritten with a responsive design, which means it can produce dashboards and visualizations that work on any device that supports HTML5.

 

The kicker is this: SAP Lumira Server is on its way, and this will be a version of Lumira that runs inside SAP HANA, as an app. It looks like it will allow for information exploration, like SAP Explorer, plus the ability to publish dashboards and visualizations into the HANA appliance, which will then run in-memory, on any device.

 

Lumira isn't formally tied to the HANA release schedule so I hope to see an early release in Q1, with a proper release later in the year.

 

HANA Graph Engine

 

Research papers have been written about the HANA Graph Engine and my guess is that the Graph Engine already exists in HANA, but it is disabled by default. The Graph Engine is the main missing piece in the HANA story - with this, you will be able to build almost any app.

 

My guess is that it will be used in the next generation APO Engine, because a Graph Engine is well suited to solving approximations of the Traveling salesman problem. I'm looking forward to this but I don't expect to see it until SP09.

 

HANA Complex Event Processing

 

SAP have a market-leading CEP engine called the Event Stream Processor (ESP), which integrates into SAP HANA. However, this leaves one serious business problem: how do I react to events based on information from the past? Currently this requires a call-out to SAP HANA via a network socket, which is (relatively) inefficient.


I believe that SAP will write a CEP Engine based on ESP integrated directly into the HANA Appliance - this will completely differentiate SAP from everyone else in the Complex Event Processing market. I don't think we will see this until at least SP09.

 

Web Development Platform

 

There is a basic Web-based IDE in SAP HANA SP06, which has been enhanced in SAP HANA SP07. However, for serious HANA development it is currently necessary to use the Eclipse workbench, for which SAP built a plugin called HANA Studio. For HANA to become a serious application platform, it needs a web-based development platform on which you can build apps, including SAP River apps.

 

I believe that a first revision of this will come in HANA SP08, and it will be further refined in SP09.

 

SAP Magnet

 

Back to apps again for a moment. There was a blog written quietly last year about a project called SAP Magnet. This is an awesome project that consumes contextual information from your email, calendar, and news sites, and presents it within the Fiori Launchpad. So you open up your iPad and it shows you your next meeting, your recent interactions with that person, their stock results, and information from the Business Suite like outstanding invoices.

 

I doubt we will see a release of Magnet until late 2014 or early 2015.

 

Other Business Applications

 

What I hope to see in 2014 is a small collection of business applications based on the SAP HANA Platform. We have yet to see HANA's capability to transform industries, the way SAP R/2 and R/3 transformed industry in the 1980s and 1990s.

 

Some of this will be based on SAP Business Suite on HANA and the existing apps being ported to HANA, but the platform is mature enough for very complex industry specific applications to be built, solving some of the world's hardest problems. I hope SAP put sufficient focus on this.

 

HANA as a Cloud Application Platform

 

All of this leads to the big ticket item - the Cloud Application Platform. SAP will sell variants of this - Infrastructure as a Service, Platform as a Service, on-premise, SAP Cloud, Partner Cloud. It will consume the existing HANA Cloud Platform, HANA Enterprise Cloud, it will be purchasable on demand and on a subscription basis or via a perpetual license (Bring Your Own License). It will allow web-based application development and delivery and will be awesome.

 

If SAP manage to do this in 2014 then I will be incredibly impressed. They have all of the components to do it and make it a success.

 

Predictive Analysis

 

I suspect that SAP will take the KXEN Infinite Insight app and build it directly into SAP HANA, like they have done with Lumira. They will hopefully then integrate this into HANA Live for application scenarios for line of business, and industry, within the SAP Business Suite.


I'd expect this to be a high priority and we may see a first version in the SP08 codebase, with major refinements in the SP09 release.

 

Competition

 

It's worth a major note: 2014 is the year when the competitors arrive at the party. This will come from the traditional RDBMS vendors like Oracle, IBM, and Microsoft. Anyone who reads my content will know that I believe they fundamentally don't understand the point of SAP HANA and have missed the mark with their offerings, but the power of their install base shouldn't be underestimated, nor should the power of their sales and marketing teams.

 

It will come from the analytic vendors like Teradata, QlikView, and Tableau as they build out their in-memory offerings.

 

And it will come from the new database and NoSQL vendors like VoltDB, Hadoop, and MongoDB. These vendors have realized that NoSQL doesn't work for high-performance apps, and some are heading in a similar direction to HANA.

 

All of the other vendors are at least two years behind SAP HANA, but some of them have very deep R&D budgets. SAP is lucky that the very large vendors have big database revenue streams to protect, which means they face the innovator's dilemma.

 

Final Words

 

In my opinion, 2014 will be the defining year for SAP HANA. The platform is by now mature enough for use in any business scenario, it will be available in any conceivable way you want to consume it, and the new features and functionality differentiate it from what else is on the market.

 

However, there are the challenges of competition coming to the market, plus the mounting pressure of moving to a subscription-based license model. This will no doubt ensure that 2014 is a memorable year.

6 Tips to avoid HANA Out of Memory (OOM) Errors


If you have spent much time with HANA at all, then you will have encountered the Out of Memory error.

 

[MEMORY_OOM]  Information about current out of memory situation: (2013-12-17 16:24:39 413 Local)

OUT OF MEMORY occurred. Failed to allocate 786815 byte.

 

This happens because HANA requires the data on which you're operating, plus all of the temporary space needed for aggregation and calculation, to be available in physical RAM. If there is not enough physical RAM available, your query will fail. There are therefore three major factors:

 

- The amount of RAM available to the HANA system

- The volume of data on which you are operating

- The amount of temporary space required by the calculation

 

The first factor is fixed, and the other two are a function of the design of your model (and also of concurrency, as each user requires their own temporary space). This is why SAP recommends you retain 50% of RAM for tables and 50% for calculation memory, though this ratio can vary depending on the system.

 

Fortunately there are several things you can do to reduce, or eliminate, Out of Memory Errors. As a bonus, every one of these things will help your overall application design and performance.

 

1) Upgrade to the latest HANA Revision

 

Newer HANA Revisions are always more memory efficient, both in how they store tables and in how they process data. How much difference a particular Revision makes depends on a number of factors, but for instance, if you use ODBC/JDBC connections and SQL for aggregations without using Analytic Views, Revision 70 may improve performance by 100x and reduce temporary space usage by 100x compared to Revision 69.

 

In addition, Revision 70 seems to be a bit more frugal with memory, and it releases RAM back to the operating system more aggressively than previous Revisions. Either way, take the time to update regularly - the HANA development team is constantly refining specific scenarios.

 

2) Migrate the Statistics Server

 

If you are on HANA SP07, you can now migrate the Statistics Server into the main Index Server process. You can read about this in SAP Note 1917938. This should save you around 2GB of memory, which, if you have a 16GB HANA AWS instance, will be much appreciated.

 

3) Never use row organized tables

 

HANA allows both row- and column-organized tables, but you should never use row tables. You can double-click on a table to check - it should say Type: Column Store. I discuss this in more detail in my blog 6 Golden Rules for New SAP HANA Developers, but the important thing here is that row-organized tables do not compress as well as column tables, and they always reside in memory. You can't unload row tables, and they are a memory hog. A quick check and fix is sketched below.
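
A hypothetical way to find offending tables, and to convert one of them (schema and table names are examples):

-- Find row-store tables in a given schema.
SELECT schema_name, table_name
  FROM SYS.M_TABLES
 WHERE table_type = 'ROW'
   AND schema_name = 'MY_SCHEMA';

-- Convert a row table to column store.
ALTER TABLE "MY_SCHEMA"."MY_TABLE" ALTER TYPE COLUMN;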

 

4) Set your tables not to automatically load

 

You can configure tables in HANA not to pre-load automatically on startup using: ALTER TABLE table_name PRELOAD NONE. This is good practice in development systems, because it means you can restart the HANA system faster (as tables won't be preloaded) and HANA uses less memory by default.

 

5) Partition large tables

 

HANA loads tables into memory, on demand, on a column-partition basis. I discuss this in some detail in the document How and when HANA SP7 loads and unloads data into memory. If you have a large unpartitioned table with a column with many distinct values - I checked one of the test tables in my system and found it had a 12GB column - then that entire column will be loaded into memory the moment any part of it is accessed.

 

If you partition a table by a sensible key (date, or an attribute like active/inactive), then only those partition-columns that are actually accessed will be loaded into memory. This can dramatically reduce your real-world memory footprint; a sketch follows below.
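
For example, a hypothetical sales table could be range-partitioned by year like this (table, column, and bounds are illustrative):

-- Repartition an existing column table by date range.
ALTER TABLE "SALES"
  PARTITION BY RANGE ("ORDER_DATE")
  (PARTITION '2012-01-01' <= VALUES < '2013-01-01',
   PARTITION '2013-01-01' <= VALUES < '2014-01-01',
   PARTITION OTHERS);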

 

6) Redesign your model

 

HANA doesn't have a temporary table concept like a traditional RDBMS such as Oracle or MS SQL. Instead, the temporary tables required for calculations are materialized in memory, on demand. HANA has a lot of clever optimizations that try to avoid large materializations, and a well-designed model won't cause them.

 

There is a need for a new advanced modeling guide, but the best one available for now is the HANA Advanced Modeling Guide by Werner Steyn. In particular, you should avoid using Calculation Views until you have mastered the Analytic View - always try to build an Analytic View to solve a problem first. This is because Analytic Views don't materialize large datasets, unless you materialize the whole Analytic View.

 

7) Make sure your Delta Merges are working correctly

 

Nice spot, Marcel Scherbinek - I forgot to add this. HANA keeps a compressed "main" store of column tables and an uncompressed "delta" store for new items. Periodically, a process called mergedog combines the two in a DELTA MERGE. This set of design tradeoffs ensures fast reads, fast writes, and good compression.

 

Sometimes, however, your tables may not be configured to automerge (you can issue ALTER TABLE table_name ENABLE AUTOMERGE), and occasionally mergedog fails and you have to restart HANA. This can cause the delta store to become very large, and because it is uncompressed, that is a big issue if you don't have much RAM.

 

If you double-click on a table and open the Runtime Information tab, you will see two memory consumption numbers: Main and Delta. Delta should always be much smaller than Main during correct operation. You can right-click a table at any time and select "Perform Delta Merge"; the SQL equivalent is sketched below.
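
A hypothetical check of the sizes, followed by a manual merge (schema and table names are examples):

-- Main vs. delta memory per column table, biggest delta first (sizes in bytes).
SELECT schema_name, table_name,
       memory_size_in_main, memory_size_in_delta
  FROM SYS.M_CS_TABLES
 ORDER BY memory_size_in_delta DESC;

-- Force a delta merge on a table whose delta has grown large.
MERGE DELTA OF "MY_SCHEMA"."MY_TABLE";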

 

Final Words

 

HANA requires that your working set of tables (those tables frequently accessed), plus sufficient calculation memory for the temporary tables materialized during your query, is available in memory. HANA will unload partition-columns of tables on a least-recently-used basis when it is out of memory.

 

If you see OOM errors and you have done (1-5), then the chances are that your model is materializing a large data set. Because temporary tables in HANA aren't compressed, they can be 5-10x larger than the equivalent column table, which means a bad model can burn through large volumes of RAM. If you have a 512GB HANA system and a 50GB table, you can easily run out of RAM.

 

It feels to me like there's a need for a new modeling guide for OOM errors for HANA. Maybe that's the subject of a future blog.

 

Have I missed anything? What are your tips?


In-Memory Computing Discussed on "Coffee Break with Game Changers" Radio


I was recently invited as a panelist for a discussion on In-Memory Computing on Bonnie Graham’s long-standing online radio show “Biz Buzz: Coffee Break with Game-Changers.” The show was less about technology and more about what it takes to be successful with “game-changing innovations” – notably, a lot more human creativity.

Today’s buzz: In-memory. Today, your recipe for a truly intelligent business requires a magic ingredient: data. Lots of it. But even if you’ve captured the best and fastest data in the world, you still need the right information culture to make use of it whenever, wherever, however you need it. Good news: Now, in-memory technology allows you to store that information in a flexible, fast-to-access, and affordable way. Ah, can the sweet taste of success be far off?

 

The experts speak:

 

  • Mike D’Urso, Deloitte: “’Keep it as simple as possible but no simpler’. When it comes to Information Technology, we all enjoy systems that are easy to understand and use.”
  • Holger Mueller, Constellation Research: “In-memory will change enterprise applications as we know them – but it’s no walk in the park.”
  • Timo Elliott, SAP: “The most sophisticated, easiest-to-use pencil in the world won’t turn anybody into Picasso. Same goes with Big Data.” Join us for insights on In-Memory Computing: Making Data Memorable.

You can access the show here: [Download MP3] [itunes] [Bookmark Episode]

Could you help out a comp-sci student from Germany?


Dear database experts,

 

my name is Anna, and I'm a comp-sci student from Germany, currently in the process of writing my Bachelor's thesis on the use of mobile devices for monitoring and administering RDBMSs. I'm conducting a survey among database experts to get a better feel for what DBAs are already using today, what they'd like to use in the future, and which devices and operating systems are most prevalent among DBAs.

 

If you are currently a DBA, I would be absolutely thrilled if you could spare 5 minutes to complete the following survey!

https://docs.google.com/forms/d/1Q2d5cfLvD4TvCa979Sho1ro8seYll_AxA7W1pX1aUgk/viewform


The survey is of course completely anonymous!

 

Many thanks in advance and warm regards,
Anna

Measuring Word Use Frequency in Rap Song Lyrics


We've already seen on SCN how to gauge sentiment on movies using Twitter data.  An additional measure of sentiment can be gained from analysing song lyrics.  This has already been done, to an extent, for rap music lyrics by the team over at the Rap Genius website.  You may well ask what possible value there could be in analysing rap song lyrics, and granted, this is more of a novelty than a serious application, but as you'll see there is some interesting data there.

 

I'll say up front that this is not something that is currently implemented using HANA (to the best of my knowledge), but I believe it is worth highlighting its existence to the HANA community because:

  • when you look at what's been done, it very well could be done in HANA, which would then allow for some very easy improvements (more on this below)
  • Rap Genius offer an API, so it would be very easy to integrate their data into a HANA or SAPUI5 application (more on this below)
  • it is a novel application, and may be of interest to those working with unstructured text analysis

 

How it Works

Go to the Rap Genius site and enter some words to be analysed.  The chart below is for various brands of sportswear: adidas, converse, nike, puma, reebok.

RapStats_adidasconversenikepumareebok.png

The above chart shows word frequency in songs since 1988 and suggests that Nike is currently regarded as "cooler" than other brands, at least in the rap music world.  A big change clearly happened around 2000, when Nike rose to prominence, although they've slipped a little over the last few years, losing ground to Adidas.  Another example - the below chart is for: Twitter, Facebook, Google, Instagram, MySpace.

RapStats_TwitterFacebookGoogleInstagramM.png

The above shows the decline of MySpace and the rise of Instagram.  We can also group words together, as in this chart:

RapStats_audibeemerbmwbentleylexusmerced.png

In the rap music world, it seems Bentley is now the most desirable luxury car, and Lexus has lost its prime position.

 

A couple of limitations are apparent:

  1. No actual sentiment is present in the analysis.  The graphs just measure occurrences as a proportion of the overall population of words; they don't distinguish between someone rapping about Nike being good or Nike being bad.
  2. No drill-down to source data.  You'll notice there is no drill-down functionality to see which songs used the lyrics entered; you just have to take it on faith.

 

To be fair to the people behind Rap Genius, this stats tool is not their core product.  Their site is all about annotating lyrics, and this stats tool is an offshoot of having collected the source lyric data.

 

A Hypothetical HANA Implementation

The "crown jewels" of the Rap Genius web site are of course the source song lyric data, most or all of which has copyright implications.  On the Rap Genius site, the source lyric data is crowdsourced - people are encouraged to upload lyrics in order to earn points on the site.  Assuming we could somehow get the source lyric data, we could implement this in HANA and get "voice of customer" sentiment data pretty much for free, just as was done in the movie recommendation engine.  Once we'd got the lyric data into a table, we could make this call to derive sentiment data:

 

CREATE FULLTEXT INDEX ... CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'
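
A fuller sketch of that statement, with hypothetical table and column names:

-- Create a fulltext index with voice-of-customer text analysis;
-- the extracted entities and sentiment tokens land in the generated
-- table "$TA_LYRICS_VOC".
CREATE FULLTEXT INDEX "LYRICS_VOC" ON "LYRICS" ("LYRIC_TEXT")
  CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'
  TEXT ANALYSIS ON;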

 

Using the Rap Genius API

Rap Genius offer an API, and it is very easy to use.  If you want the data underlying the first graph shown above (the one containing the data on trainers), you make this URL call: http://rap-stats-api.herokuapp.com/adidas-converse-nike-puma-reebok which returns some simple JSON data:

RapStats_APICall.png

This URL pattern could be called from XS JavaScript running on the HANA backend, or from a SAPUI5 frontend.  The MAKit chart controls that come with SAPUI5 can easily be bound to a JSON data model, as shown in the SAPUI5 documentation.

SAP HANA: Using "Dynamic Join" in Calculation View (Graphical)


Hi Folks,

 

This blog is intended to share my experiences of using the "Dynamic Join" option in the "Join" node of a Calculation View (Graphical).

Note: I have done this on HANA Rev 68.

 

Problem Description:

 

There are scenarios in which we want the system to decide intelligently whether or not to perform a join before aggregation, depending on the query it receives from the UI (front end).

 

Example:

 

Let us take a sample table, EMPLOYEE, with the following DDL and data:

 

CREATE COLUMN TABLE "EMPLOYEE" (
  "EMP NO" INTEGER CS_INT,
  "EMPLOYEE NAME" VARCHAR(200),
  "EMPLOYEE TYPE" INTEGER,
  "GENDER" VARCHAR(10),
  "AGE" INTEGER,
  "REGION" VARCHAR(10),
  "SALARY" DECIMAL(18, 0)
);

 

Load the data into the EMPLOYEE table with the following insert statements:

 

insert into "EMPLOYEE" values(1,'Sachin',1,'M',40,'APAC',50000);
insert into "EMPLOYEE" values(2,'Ganguly',1,'M',42,'APAC',40000);
insert into "EMPLOYEE" values(3,'Dravid',1,'M',40,'AMER',40000);
insert into "EMPLOYEE" values(4,'Laxman',1,'M',43,'AMER',40000);
insert into "EMPLOYEE" values(5,'Dhoni',1,'M',35,'EMEA',40000);
insert into "EMPLOYEE" values(6,'Sehwag',1,'M',36,'EMEA',30000);
insert into "EMPLOYEE" values(7,'Kohli',1,'M',23,'EMEA',20000);
insert into "EMPLOYEE" values(8,'Kumar',1,'M',22,'EMEA',10000);
insert into "EMPLOYEE" values(1,'Law',2,'M',24,'APAC',30000);
insert into "EMPLOYEE" values(2,'Eddie',2,'M',26,'EMEA',150000);
insert into "EMPLOYEE" values(3,'Paul',2,'M',23,'APAC',120000);
insert into "EMPLOYEE" values(4,'Howrang',2,'M',22,'AMER',60000);
insert into "EMPLOYEE" values(5,'Xiayou',2,'F',22,'AMER',8000);
insert into "EMPLOYEE" values(6,'Nina',2,'F',22,'AMER',70000);
insert into "EMPLOYEE" values(1,'Federer',3,'M',30,'APAC',1150000);
insert into "EMPLOYEE" values(2,'Nadal',3,'M',29,'APAC',5230000);
insert into "EMPLOYEE" values(3,'Djokovic',3,'M',24,'APAC',5045000);
insert into "EMPLOYEE" values(4,'Murray',3,'M',24,'APAC',55650000);
insert into "EMPLOYEE" values(5,'Sampras',3,'M',44,'AMER',5660000);
insert into "EMPLOYEE" values(6,'Agassi',3,'M',45,'AMER',5056000);
insert into "EMPLOYEE" values(7,'Venus',3,'F',28,'AMER',9500500);
insert into "EMPLOYEE" values(8,'Serena',3,'F',29,'AMER',9507000);
insert into "EMPLOYEE" values(1,'Messi',4,'M',24,'APAC',510000);
insert into "EMPLOYEE" values(2,'Ronaldo',4,'M',28,'AMER',500);
insert into "EMPLOYEE" values(3,'Xavi',4,'M',30,'EMEA',5002300);
insert into "EMPLOYEE" values(4,'Beckham',4,'M',40,'EMEA',7850000);

 

Now let us try to analyze the share of salaries earned by each "Employee Type" at "Region" level and at "Country" level.

 

We need to create a calculation view with two "Aggregation" nodes, one of which does not include "Employee Type" in the aggregation, as shown below.

 

Let me name this Calculation view as "CV_DYNAMICJOIN_TRUE".

 

Aggregation 1:

 

Screen Shot 2014-01-07 at 8.15.37 PM.png

 

Aggregation 2:

 

Screen Shot 2014-01-07 at 8.16.20 PM.png

While adding "Salary" from this node, rename it to "Total_Salary".

 

Now let us join these aggregation nodes with an "Inner Join" and with "Dynamic Join" set to "True", as shown below:

 

Screen Shot 2014-01-07 at 8.19.11 PM.png

 

Now we need to create a "Calculated Column", "Share_Salary", in the "Projection" node, which gives us the share of salary at "Region" or "Country" level (presumably the expression "Salary" / "Total_Salary"), as shown below:

 

Screen Shot 2014-01-07 at 8.22.47 PM.png

Now activate the view and let us analyze the output.
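
For context, the kind of query a front-end tool would send at Region level looks something like this (the package path and exact exposed column names are hypothetical):

-- Query the activated view at Region + Employee Type level; with the
-- dynamic join, the join between the two aggregation nodes is executed
-- only on the columns requested here.
SELECT "REGION", "EMPLOYEE TYPE",
       SUM("SALARY") AS "SALARY",
       SUM("Total_Salary") AS "Total_Salary"
  FROM "_SYS_BIC"."mypackage/CV_DYNAMICJOIN_TRUE"
 GROUP BY "REGION", "EMPLOYEE TYPE";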

 

Output:

 

Screen Shot 2014-01-07 at 8.30.33 PM.png

 

You can see in the above result that "Salary" is at Region and Employee Type level, while "Total Salary" is at Region level, and "Share Salary" gives the share of salaries for each employee type at Region level.

 

Case 2:

 

I have created a similar calculation view, keeping "Dynamic Join" set to "False".

 

Let us see the output of a similar query analyzing the salaries of each Employee Type at Region level:

 

Screen Shot 2014-01-07 at 8.35.17 PM.png

You can see here that the join behaves like a static join at Region as well as Country level and returns the salaries at that combined level. This gives us erroneous results, and hence more records: 16 instead of 11.

 

Now let us analyze the Viz plans to check how many records each aggregation node returns for the queries above:

 

Case 1 ( Dynamic Join = 'True') :

 

 

Screen Shot 2014-01-07 at 8.27.49 PM.png

 

As you can see above, we are getting 3 records from the left node - we have 3 regions, and we get "Total Salary" for those 3 regions at Region level. Similarly, we get 11 records from the right node, at Region and Employee Type level.

 

Case 2 ( Dynamic Join = 'False' ) :

 

Screen Shot 2014-01-07 at 8.41.24 PM.png

 

As you can see above, we are getting 7 records from the left node, at Region and Country level, and similarly 16 records from the right node, at Region, Country, and Employee Type level.

 

So the "Dynamic Join" helps the engine decide intelligently, based on the query received from the UI, whether it has to join at Country level or not, which makes our work as modelers a bit easier.

 

I hope this blog helped you to understand the benefits of using a "Dynamic Join". Please do let me know your feedback.

 

Yours,

Krishna Tangudu

TechEd 2013 Replays on SAP HANA Cloud Integration and SAP Financial Services Network


In case you were not able to attend SAP TechEd 2013, the following replays of educational sessions (from Las Vegas, October 21-25, 2013) are now available. You'll gain valuable insight from these sessions on SAP HANA Cloud Integration and SAP Financial Services Network. Several interview replays with SAP Mentors on cloud technology are also available. All these links are for external use; feel free to share them.


SAP HANA Cloud Integration – An Update from Product Management
(POP104 Lecture, Speaker: Sindhu Gangadharan, 60 min)  Replay and Slides

 

Integration is the key to achieving the benefits of the cloud. SAP HANA Cloud Integration is an integration platform hosted in the SAP HANA cloud that facilitates the integration of business processes spanning different departments, organizations, or companies. It enables end-to-end process integration across cloud and on-premise. It also contains a data service capability that allows for efficient and secure ETL tasks to move data between on-premise systems and the cloud. In this session, you will understand these product capabilities and also see an end-to-end demo in action.

 

Business Network Integration – SAP Financial Services Network, Ariba
(POP204 Lecture, Speaker: Udo Paltzer, 60 min) Replay and Slides

 

In this session, we will take a closer look into business networks offered by SAP Financial Services Network and Ariba. SAP Financial Services Network is a new, innovative on-demand solution, available today, designed to simplify the electronic interactions between financial institutions and their corporate customers. It provides a reliable and secure platform with an on-demand integration service, including a simplified and rapid connectivity approach. We will also provide information about the options you have to quickly leverage recent SAP cloud acquisitions, such as Ariba, with your SAP landscape. We will explain what project content and methodology are now available from SAP to support your projects in hybrid (on-premise and cloud) environments within only a few weeks, based on the Ariba Network with an SAP Business Suite integration example.

 

 

Access SAP TechEd 2013 replays for SAP's on-premise orchestration  and integration solutions, including SAP NetWeaver Process Orchestration, the B2B add-on, and SAP Operational Process Intelligence powered by SAP HANA.

 

Access SAP TechEd 2013 Online to watch all available on-demand replays of executive keynotes, lectures, demo jams, interviews, and other highlights.
