
SAP River: Looking at Entities, Associations, Actions, and Views


Want to:

 

Simply model your entities with various elements?

 

Create your own unique types for your elements?

 

Make associations between entities with an easy-to-use syntax?

 

Set actions with specific parameters using business logic?


Install JSON View to optimally see your metadata?

 

Create views to organize and order data using either SQL or an RDL variation of SQL?


Screen Shot 2014-02-06 at 9.55.15 AM.png

 

View other tutorials on SAP River at the SAP HANA Academy.


SAP HANA Academy - over 500 free tutorial technical videos on using SAP HANA.


-Tom Flanagan

SAP HANA Academy

Follow @saphanaacademy


Dynamic Analytic Privileges Using Procedures in SAP HANA


Analysis authorizations in SAP HANA control which data users can see when they view HANA artifacts. SAP HANA provides several different layers of security from which you can benefit, and in this article we'll be looking at analytic privileges. We'll examine how to create dynamic analytic privileges on HANA artifacts and how they restrict the data for different users when they access HANA models from Studio or from third-party reporting tools.

Here we have the analytic view AN_EMPLOYEE, which contains employee salary data by region and country.

  1.jpg

We will restrict the user XXXX_TEST to the APAC region.

Here is the sequence of steps to achieve this:

1) Create a procedure at the repository or catalog level that returns dynamic output for the analytic privilege.

2) Create an analytic privilege based on the repository procedure or catalog procedure.

3) Assign the analytic privilege to the user to restrict the data on HANA views.


1) Create a procedure at the repository or catalog level that returns dynamic output for the analytic privilege

     Procedure Rules:

  • DEFINER procedures
  • Read-only procedures
  • Procedures must have a predefined signature as follows:
    • No input parameter/s
    • For the “IN” Operator, only 1 output parameter defined as a Table Type with a single column
    • For all Unary Type Operators (EQ, CP, LT, GT, LE, GE), only 1 output parameter defined as a Scalar Type
    • For the Binary Type Operator (BT), only 2 output parameters defined as Scalar Types
      • CAUTION
      • This means you cannot use multiple BT ranges or multiple CP patterns in the same procedure. This can have an impact on the design of your solution, specifically when translating the Authorization Mappings in BW to appropriate filter conditions in HANA
      • Only the following data types are supported for output parameters:
      • Datetime types DATE, TIME, SECONDDATE, TIMESTAMP
      • Numeric types TINYINT, SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE
      • Character string types VARCHAR, NVARCHAR
      • Binary types VARBINARY


In this section we need to create the SAP HANA stored procedure that selects the attribute filter values from the mapping table. The stored procedure returns the values of an attribute that a given HANA user is authorized for.


   1) Create a mapping table in the catalog that holds the user name and the authorized REGION values.


2.jpg
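The screenshot above shows the mapping table. As a hedged sketch of the DDL behind it (the schema name, column types and lengths are assumptions; the table and column names are taken from the catalog procedure shown later in this post):

CREATE COLUMN TABLE "SCHEMA"."AUTH_INFO"
("USER" NVARCHAR(256),
 "REGION" VARCHAR(20));

INSERT INTO "SCHEMA"."AUTH_INFO" VALUES ('XXXX_TEST', 'APAC');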

   2) Create a table for the output filter values of the stored procedure. We will use this when creating the catalog procedure.

 

CREATE COLUMN TABLE "SCHEMA_TEST"."AUTH_INFO_FILTER" ("REGION" VARCHAR(20));


Repository Procedure:


Create a procedure with the following properties.

3.jpg

In the output pane we need to define the output filter structure. Here we are defining REGION as the output.


4.jpg

 

In the above procedure, SESSION_USER resolves to the user who is accessing the views.
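For readers who cannot make out the screenshots, here is a rough sketch of what the procedure body boils down to (it mirrors the catalog procedure shown below; the repository-specific wrapper and the output parameter name are assumptions):

-- output parameter: a table type with the single column REGION
var_out = SELECT "REGION"
          FROM "SCHEMA"."AUTH_INFO"
          WHERE "USER" = SESSION_USER;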

 

Catalog Procedure


Now we will create a catalog procedure whose output is returned using the structure AUTH_INFO_FILTER. The output values act as the filter here.

 

CREATE PROCEDURE REPO_CATALOG_PROCEDURE (
    OUT VALTAB "SCHEMA"."AUTH_INFO_FILTER" )
LANGUAGE SQLSCRIPT
SQL SECURITY DEFINER
READS SQL DATA AS

BEGIN

VALTAB = SELECT "REGION"
         FROM "SCHEMA"."AUTH_INFO"
         WHERE "USER" = SESSION_USER;

END;


Now it is visible in the catalog under your schema.


5.jpg

2) Create an analytic privilege based on the repository procedure or catalog procedure


Using Repository Procedure

Create an analytic privilege using the repository procedure we created earlier.

6.jpg

Assign the analytic privilege to the user so they can view the restricted data on the information model AV_EMPLOYEE.

 

7.jpg

After assigning the analytic privilege, the user now has read access to the analytic view.

8.jpg

Now when the user views the data, only the APAC region is shown.

 

9.jpg

Using Catalog Procedure


Assign the catalog procedure to the analytic privilege by removing the existing repository procedure (I am doing this only for testing).

10.jpg

Now view the data as the TEST user.

11.jpg

The user is not able to see the data in the analytic view because execute rights on the procedure have not been granted to "_SYS_REPO".

Execute the SQL below to grant execute on the procedure to the user _SYS_REPO.

 

GRANT EXECUTE ON "SCHEMA"."REPO_CATALOG_PROCEDURE" TO _SYS_REPO WITH GRANT OPTION;

 

After that, the user is able to see the data in the analytic view.

12.jpg

Licensing, Sizing and Architecting BW on HANA


I've had more than a few questions on BW on HANA Licensing and Sizing, and it seems that there isn't anything authoritative in the public domain. So here we go, but before we start...

 

Caveats

 

Architecting BW on HANA systems requires some care. First, database usage, number of indexes and aggregates, use of database compression, reorgs and non-Unicode systems all cause a variance in compression in the HANA DB. The best way to size a HANA DB is to do a migration.

 

In addition, you may choose to use the cold data concept, to archive/delete prior to migration or to use Sybase IQ for NLS. All of these will vary the amount that you need. And don't forget growth - you need to plan for data volume growth, and M&A activities or projects which may increase data volumes.

 

If you get this wrong with SAP HANA, then you may buy the wrong hardware. I've worked with customers who bought 3-4x too much, and customers who bought 3-4x too little, so please get expert advice.

 

In addition, be careful when architecting HANA systems: do you need Dev/Test/UAT? If you have a big system, will it be scale-out, will there be a standby node, and is there HA/DR? Where will you store backups, and where will the application servers run?

 

So whilst this blog is intended to help and inform, the responsibility lies with you for getting it right. If in doubt, get the services of an expert. Now we've got that out the way!

 

What are the license models for BW on HANA?

 

It is possible to buy BW on HANA in one of two ways:

 

1) By the 64GB unit. As noted in this slide deck, this is EUR 60k per unit for up to 10 units, and then the price decreases with every additional 10 units you buy, and future licensing purchases are accretive and retroactive.

2) By Software Application Value. You pay 8% of your total SAP purchase price and SAP provide an unlimited runtime license for BW. This is also available at 20% including ERP on HANA.

 

As has been described before, BW on HANA is non-discountable, but you should always have a frank discussion about your overall license package with your Account Exec.

 

Note that this purchase covers you for all usage: Dev, Test, Training, HA and Disaster Recovery. The only time when you need anything else is if you want to build HANA Enterprise models, and in this case you may need a HANA Enterprise license.

 

Generally, the SAV licensing is much cheaper unless you are a large organization who has a lot of SAP software and a small BW. If you are a mid-size organization with a big BW, the SAV licensing can be 10% of the unit-based price.

 

How do I size BW on HANA?

 

There is an attachment to SAP Note 1736976 - Sizing Report for BW on HANA. This note contains some manual corrections, and then needs to be installed via SAP Transaction SNOTE. Ensure you run the latest version, because it is constantly updated. You can then run ABAP Report /SDF/HANA_BW_SIZING.

 

When you run this report, run it with and without future growth, and keep both sets of numbers. It will produce a text file that looks like this.

Screen Shot 2014-02-06 at 8.00.54 PM.png

Now it is necessary to be careful when interpreting this report. In this case, no growth was assumed and it is a 120GB MSSQL database, which it suggests will be a 127GB HANA DB. The sizing tool tends to be conservative and over-size slightly, especially for small systems.

 

In newer versions of this tool it will tell you how many Medium (512GB) nodes you would need, or how many Large (1TB) nodes. This is a rule of thumb; use it with care.

 

Now ensure that you think about what you are sizing for. For instance, you may feel that you can archive or delete data. Now is a good time to do this, and if you look at the PSA and DSO Change Log sizes in this system below, a cleanup is definitely in order. Also, you can set some data to be "cold" in HANA and purge it from memory after the migration. You can remove this from the sizing if you like.

 

If you have a very large system (greater than 3-4TB of HANA required) then it may be cost-effective to use the IQ Near Line Storage (NLS). You can subtract any data that you can archive to NLS from your sizing, but be careful: the NLS software is only good for cold data that is not updated frequently.

 

How do I architect BW on HANA?

 

First, start by sizing your productive environment. Once you have this, you can decide the production architecture. In my case here I only need 160GB RAM, so I would buy a 256GB HANA system.


Once you require more than 1TB RAM then you will need to move to a scale-out HANA system. This is where customers often go wrong. Let's assume we use Medium (512GB) nodes and the sizing tool says we need 100GB for the row store and 1.5TB for the column store. The row store requires one master node, and the column store fits on the remaining nodes (1.5TB spread over three 512GB nodes). This means that we need 4 active nodes, plus one standby node if we want high availability. That's 5x Medium (512GB) nodes for production.

 

Now we need to architect for disaster recovery, and we can take the same as production.

 

Now we can architect our test system. If our disaster recovery can be warm (i.e. take some time to start up in case of a failure) then we can share this with our test system. This may make sense if you want a production sized copy in test. Note that if you do not have a DR system you will need a dedicated test system. If you have a scale-out production environment, always ensure you have a scale-out test system for scale-out testing.

 

And now you need a development system. Normally I recommend copying the existing system, and one 512GB node should be sufficient unless development is a copy of production. Use common sense.

 

From here you can work with a hardware vendor for the best approach, but be careful - the hardware vendors often cut some items out to cut cost (or indeed add extra hardware to get a larger sale), and I've dealt with a number of customers who have been bitten by this and have had to buy substantial amounts of extra hardware. Ensure that your hardware partner has an upgrade policy for the amount of hardware you expect to need in the future, based on growth.

 

Final Words

 

My final word would be to make sure that you get good advice throughout this process, and sanity check it yourselves. With a regular database, if you size it wrong then you can add more RAM or disk at relatively low cost, and you will just sacrifice performance. With HANA, you will have overspent, or will have to spend a significant amount to change your HANA architecture. Depending on your design, this can be very inconvenient.

 

Your first HANA deployment is critical, because it will set the tone of sentiment in the business for HANA as a technology stack. Take the time to get this part right, and you will help your BW on HANA deployment on its way. Your project will appreciate you for it!

 

Thanks to HANA Distinguished Engineer Lloyd Palfrey for his input on this blog!

SAP HANA Live on the Rocks !!


SAP is moving ahead at pace in the area of technology. As part of their offering SAP BSOH (SAP Business Suite on HANA), they have been providing SAP ERP running on HANA. We have been hearing quite a lot about HANA and what it can do with your data, so after checking out SAP HANA Live we decided to take it for a ride.

 

 

Traditionally, IT infrastructure was split into two spaces: OLAP and OLTP. Please read http://datawarehouse4u.info/OLTP-vs-OLAP.html for more info.

HANA has the wonderful feature of in-memory computing. Technically, it puts all your data in memory so that it can be processed faster, and it also allows you to run analytics on it. That means the HANA database can work as an OLAP and an OLTP engine at the same time.

 

This wonderful feature also means that we can skip a separate BW layer and run analytics on the same HANA DB. And with the SAP HANA content packages (SAP HANA Live), there are predefined views that bring data from your SAP instance.

 

We had made this setup of ECC 6.0 EHP7 running with HANA Database.

 

A few transactions later, here are some reports from SAP Lumira. All the reports are online and live - no need to worry about your ETL scripts, schedules, etc.

 

 

659x553xSAPHANALive2.png

 

654x553xSAPHANALive1.png

There are many views available with SAP HANA Live that can be used and analyzed. Below is a screenshot of the SAP HANA Live Browser, which can be used to explore them further.
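For those who prefer SQL to the browser, the HANA Live query views are plain calculation views under _SYS_BIC and can be consumed directly. The package path and view name below are placeholders rather than an actual HANA Live artifact name - check the view browser for the views delivered with your release:

SELECT TOP 10 *
FROM "_SYS_BIC"."<hana.live.package.path>/<QueryViewName>";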

SAPHANALiveBrowser.png

8 Tips on pre-processing flat files with SAP HANA


Most hardcore HANA nuts find themselves at a Linux shell console a lot of the time. Data quality issues are endemic when dealing with data from external sources, and we often find ourselves with 50-100GB data files on a HANA appliance which need cleaning up. Because HANA runs on Linux, it's often convenient to do this on the appliance.

 

So I thought I'd reach out to the world's leading HANA experts, the HANA Distinguished Engineers, to see what their hints and tips are. But first, a few of my own.

 

Get the files on the HANA appliance and fix them there

 

It's not supported in a productive instance, but for development, test, pilots and PoCs we often cheat. So copy the files onto the HANA appliance data drive or log drive and crunch them there. Just make sure you don't run out of disk space, because bad things happen if you do. You can take advantage of HANA's fast local disks and immense CPU/RAM power.

 

HANA only supports tab, comma, semi-colon and pipe delimited files

 

It's a bit inconvenient but HANA doesn't support other delimiters, even when you specify the delimiter in the load control file. So if you have files that are delimited some other way, you need to convert.
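For reference, here is a hedged sketch of specifying one of the supported delimiters when loading a converted file with the SQL IMPORT command (an alternative to the control-file approach; paths, schema and table names are placeholders):

IMPORT FROM CSV FILE '/hana/shared/staging/output.csv'
INTO "MYSCHEMA"."MYTABLE"
WITH RECORD DELIMITED BY '\n'
     FIELD DELIMITED BY ','
     ERROR LOG '/hana/shared/staging/output.err';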

 

Think very carefully about what tool you use.


If you want to do a simple substitution of Ctrl-A (Start of Header) delimited files to CSV then there are several ways to do it:

 

tr '\01' ',' < input.dat > output.csv

sed -e 's/,/\01/g' input.dat > output.csv

awk '{gsub(",","\x01")}1' input.dat > output.dat


It might surprise you that the performance is very different. All of them run in a single thread. In my test, awk runs in 22 seconds, tr runs in 32 seconds and sed runs in 57 seconds.

 

One interesting trick is to avoid using the /g addition to sed. /g is always expensive because it matches all such strings.

 

sed -e 's/,/\01/' input.dat | sed -e 's/,/\01/g' >/dev/null

 

This actually runs 15% faster.

 

Useless use of cat

 

cat is the friend of the lazy UNIX admin and it generally just adds overhead. For example, the following commands do the same thing:
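A typical pair, reusing the awk one-liner from above, would be:

cat input.dat | awk '{gsub(",","\x01")}1' > output.dat
awk '{gsub(",","\x01")}1' input.dat > output.dat

The second form avoids an extra process and an extra pipe for exactly the same result.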

 

There is even a page dedicated to the useless use of cat. Awesome.

 

Learn awk

 

My biggest piece of advice is to learn how to use awk, especially if you are processing fixed format files into CSV. Here is an example of an awk file to turn a fixed-format file with a datestamp and a text element into a CSV. This runs incredibly quickly!

 

#!/bin/bash
# Usage: ./awk.sh <fixed-format-file>
# The datestamp is derived from characters 9-16 of the file name ($1).
# Note: FIELDWIDTHS requires gawk.

awk -v DATESTAMP=`date -d ${1:8:8} +%F` 'BEGIN{FIELDWIDTHS="2 2 2 3 10"}{

    HOUR=$1
    MIN=$2
    SEC=$3
    MSEC=$4
    TEXT=$5
    # emit: "YYYY-MM-DD HH:MM:SS.mmm,TEXT"
    printf ("%s %s:%s:%s.%s,%s\n", DATESTAMP, HOUR, MIN, SEC, MSEC, TEXT)

}' $1 >$1.csv

 

Download a few tools to help you on your way

 

I put a short document together that details my favorite HANA tools. I hope you enjoy!

 

Parallelize

 

If you use pigz, you will be parallelizing already. One of my favorite tricks is to write awk scripts and then run them on 20 files at once. Because these scripts are usually CPU bound, each will take up a single thread. With the example above we would do

 

ls *.fixedfile | parallel ./awk.sh

 

Go and make a cup of tea and you will have all your files extracted.

 

Go and load your data into HANA in parallel

 

I put together a Best Practices for HANA Data Loads document a long time ago. I think it's still pretty relevant today, and will help you get data into HANA fast.

 

Final Words

 

None of this is good practice in a real productive HANA environment and you will probably have an ETL tool like SAP Data Services or a streaming tool like SAP Event Stream Processor. But when it comes to throwing together demos fast, these tips may get you out of trouble. I hope you enjoy!

HANA History Memory Usage Data


Many questions and discussions are floating around on how to check the historical memory usage of a HANA database. I managed to set up RCA HANA DB analysis on SOLMAN 7.1 and did a comparison of Memory Overview and Resource Utilization with HANA Studio rev 7.

SOLMAN RCA (database analysis)

Once configured, we can easily obtain the history data by entering a timeframe, as the data is stored in a BW InfoCube in Solution Manager.

Database -> Host Memory

    

Database -> Hana Service Memory

 

Database -> Resource Usage

Hana Studio Rev7

In HANA Studio rev 7, we can check memory usage via the Memory Overview (which provides only current data) and Resource Utilization (you can set the timeframe, but the data is not in tabular format - you need to browse the graph to obtain the usage figures).

 

Memory Overview

 

Resource Utilization:

 

 

As we can see, RCA in Solution Manager organizes the history data into charts and graphs, and the data is also available in tabular format, which gives us a clearer picture compared to the data presented by HANA Studio. This makes it much easier and extremely useful for a solution architect or administrator to forecast future memory growth.
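If you prefer SQL, the history collected by the statistics server can also be queried directly from the _SYS_STATISTICS schema. This is a hedged example - the exact table name and columns vary by revision, so check what your system offers:

-- history of host resource/memory usage collected by the statistics server
SELECT TOP 100 *
FROM "_SYS_STATISTICS"."HOST_RESOURCE_UTILIZATION_STATISTICS";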

 

Cheers,

Nicholas Chang

SAP River: Using Data Preview


In an informative video tutorial, Philip Mugglestone from the SAP HANA Academy showcases how to use the data preview feature when developing an SAP River application in SAP HANA studio.

 

(Note: Internet Explorer 9 or higher must be installed on the workstation in order to use the data preview feature.)

 

Data preview empowers users with a simple way to create, change, and remove data. Philip demonstrates the ease in which a user can test and work with actions using the data preview window.

 

Data preview is very useful for users who are developing or testing an application as they can work with actions without having to create OData calls from a UI or use Postman.

 

The data preview feature allows users to create data, work with data, test and call actions in a very easy and straightforward way while developing in SAP HANA Studio.


Screen Shot 2014-02-11 at 12.20.19 PM.png

 

View other tutorials on SAP River at the SAP HANA Academy.


SAP HANA Academy - over 500 free tutorial technical videos on using SAP HANA.


-Tom Flanagan

SAP HANA Academy

Follow @saphanaacademy

How to avoid NULL (?) values when using dimension attributes with fact table columns that contain blanks ( '' )


Purpose: Describe a method to address a common DW/BI problem of not having a matching row in a dimension for a given fact where the fact column is blank ( '' ) whitespace. In general, we want to avoid returning null attribute values for a given entry in a fact. Just as a side note - this issue is not specific to HANA and can (and does) need to be addressed in whatever database your solution may be implemented in. This simply describes an easy way to solve it using HANA-based components.

 

Interesting to note, in SAP BW, master data tables/InfoObjects always have an entry with a blank row. If I had to guess, this is to ensure this behavior does not occur!

 

Real World Problem: User observes that while querying a given model in HANA, there is $300 in total sales by "attribute 1", with $100 of that falling into a null ("?") "attribute 1". When the user now implements a filter on "attribute 1", the null value is dropped and the $100 "disappears", which can cause some heartburn for the average user, "where did my sales go?!" The model is built with a fact and dimension, having a left outer join with a 1:n relationship.

 

Shouldn't this join type never drop data from the fact? The answer is: it can. Even in a left outer join, when you apply a filter on the right table an inner join is effectively executed. If you have a fact table that has blank ( '' ) values in it, and no matching blank ( '' ) value in the dimension, then the inner join drops the key figure value from the result.

 

Generally, any ECC tables replicated with SLT will have tables with columns that are non-nullable with default values of 0 (for decimal) or ( '' ) for NVARCHAR, therefore if there is no value in a given column it will always be stored internally as blank ( '' ) or whitespace.

 

Observing the "problem"

 

CREATE TABLE "MOLJUS02"."TEST_WHITESPACE_FACT"
("MATNR" NVARCHAR(18),
"SALES" DECIMAL(15,2));
CREATE TABLE "MOLJUS02"."TEST_WHITESPACE_DIM"
("MATNR" NVARCHAR(18),
"ATTR1" NVARCHAR(18));
INSERT INTO "MOLJUS02"."TEST_WHITESPACE_FACT" VALUES ('1234567', 100);
INSERT INTO "MOLJUS02"."TEST_WHITESPACE_FACT" VALUES ('1238890', 100);
INSERT INTO "MOLJUS02"."TEST_WHITESPACE_FACT" VALUES ('', 100);
INSERT INTO "MOLJUS02"."TEST_WHITESPACE_DIM" VALUES ('1234567', 'BLUE');
INSERT INTO "MOLJUS02"."TEST_WHITESPACE_DIM" VALUES ('1238890', 'RED');
--Simulate a dim/fact query
SELECT B."ATTR1", SUM(A."SALES")
FROM "MOLJUS02"."TEST_WHITESPACE_FACT" A
LEFT OUTER JOIN
"MOLJUS02"."TEST_WHITESPACE_DIM" B
ON (A."MATNR" = B."MATNR")
GROUP BY B."ATTR1";
--Simulate a dim/fact query
SELECT B."ATTR1", SUM(A."SALES")
FROM "MOLJUS02"."TEST_WHITESPACE_FACT" A
LEFT OUTER JOIN
"MOLJUS02"."TEST_WHITESPACE_DIM" B
ON (A."MATNR" = B."MATNR")
WHERE B."ATTR1" <> 'BLUE'
GROUP BY B."ATTR1";

Result from first query

 

Result from second query - Where did my $100 go?! If I excluded Blue materials, I should still have $200 remaining!

 

Real World Solution: It's actually quite simple - you always want to have a matching record in your dimension for any possibilities that may exist in the fact. In some applications including SAP ERP, you can have transactions that have no value for a given column. In my example above, the real scenario occurred with CO-PA when there were transactions with Sales values that had no MATNR (Material) assigned, only a ( '' ), so it is certainly possible that this can happen in a production environment.

 

To solve the above, we simply need a record in the dimension with a ( '' ) value in the key(s). This would avoid any chance of null values occurring when using an attribute.

 

INSERT INTO "MOLJUS02"."TEST_WHITESPACE_DIM" VALUES ('', '');

Now, let's run the same queries above again and observe the difference.

 

( '' ) instead of null is shown

 

No longer dropping any sales since we now have a matching dimension, my data is back!

 

Now, the above is all good and fine - the solution is straightforward, you are saying. I can just insert a record into the required dimension tables in each environment, right? My answer would be: that's no fun - why would you do this manually when you can have HANA do it for you? Let's create a stored procedure that does this work for us!

 

We need to create a stored procedure that inserts a whitespace row into every table in a target schema. In my case, this was an SLT managed schema, so some elevated rights are needed. To insert into an SLT managed schema, you either need to have the role <SCHEMA_NAME>_POWER_USER (which is created when an SLT connection is made) and create the procedure to "Run With" Invoker's Rights, OR assign the same role to user _SYS_REPO and choose "Definer's Rights" for the procedure.

 

- Procedure has input parameter to change which target schema to insert whitespace records into

- Read all tables of target schema that are not SLT or DD* tables, and also have non-nullable columns

- Omit any tables that have at least one nullable column, or a non-nullable column with no default value.

- Only look at tables where the column in position 1 is a primary key

- Build a worklist of table/key to loop at

- Build a dynamic SQL statement to insert a whitespace record into each table in the schema.

 

Below is the _SYS_BIC representation of the source, to show the input parameter.

 

create procedure "_SYS_BIC"."sandbox.justin.insert_whitespace/INSERT_WHITESPACE_2" (  in process_schema NVARCHAR(12)  )
language SQLSCRIPT sql security invoker default schema "DE2_DH2" as
/********* Begin Procedure Script ************/
-- scalar variables
i INTEGER;
row_count INTEGER;
statement NVARCHAR(256);
BEGIN
--Work list
table_list =
--List of tables in the target schema that match criteria
--perform minus to remove any tables that have at least one
--nullable field or a nullable field plus no default value
SELECT DISTINCT "TABLE_NAME"
FROM "SYS"."TABLE_COLUMNS"
WHERE "SCHEMA_NAME" = :process_schema
AND "IS_NULLABLE" <> 'TRUE'
AND "DEFAULT_VALUE" is not null
AND "TABLE_NAME" NOT LIKE 'RS%'
AND "TABLE_NAME" NOT LIKE 'DD%'
MINUS
--List of tables that have at least one nullable column, or one
--non nullable column that has no default value
SELECT DISTINCT "TABLE_NAME"
FROM "SYS"."TABLE_COLUMNS"
WHERE "SCHEMA_NAME" = :process_schema
AND ("IS_NULLABLE" = 'TRUE' OR
("IS_NULLABLE" = 'FALSE' AND "DEFAULT_VALUE" is null))
AND "TABLE_NAME" NOT LIKE 'RS%'
AND "TABLE_NAME" NOT LIKE 'DD%';
--Use previous table list and find the column that is a primary key
--and is in position 1 for each table, in order to build insert statement
table_columns =
SELECT A."TABLE_NAME", B."COLUMN_NAME"
FROM :table_list A
INNER JOIN
"SYS"."INDEX_COLUMNS" B
ON (A."TABLE_NAME" = B."TABLE_NAME")
WHERE B."POSITION" = '1'
AND B."CONSTRAINT" = 'PRIMARY KEY';
-- store dataset size in row_count variable
SELECT COUNT(*)
INTO row_count
FROM :table_columns;
--Loop over result set, building and executing an INSERT statement for the required tables
FOR i IN 0 .. :row_count-1 DO
    SELECT 'INSERT INTO' || ' "' || :process_schema || '"."' || "TABLE_NAME" || '" ' || '("' || "COLUMN_NAME" || '") VALUES ('''')'
    INTO statement
    FROM :table_columns
    LIMIT 1 OFFSET :i;
    EXEC(:statement);
END FOR;
END;
/********* End Procedure Script ************/
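A hedged usage example: call the procedure with the target (e.g. SLT-managed) schema as its parameter; the schema name below is a placeholder.

CALL "_SYS_BIC"."sandbox.justin.insert_whitespace/INSERT_WHITESPACE_2"('SLT_SCHEMA');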

Happy HANA!

Justin


10 Golden Rules for SAP BW on HANA Migrations


This blog is the third in a sequence of blogs. It starts with Licensing, Sizing and Architecting BW on HANA and moves onto 10 Golden Rules for SAP HANA Project Managers. In this piece, I'm going to discuss some tips for making a migration a success from a technical perspective.

 

1) Create benchmarks

 

Always benchmark before and after. Make sure they are business-relevant, and are run in a place that excludes network and PC problems - the same place each time. Get the query variants and teach the technical team how to run and time the queries. Create an Excel workbook which has queries on the rows and query runs on the columns and run it every day after the migration. Now you can track project success.

 

2) Exclude unnecessary risks

 

I've seen so many projects that include unnecessary risks. Here are some examples

 

- Not doing full backups that allow proper restore points

- Putting installation files on network shares

- Having application servers on different networks to database servers

 

Find ways to remove these risks, you don't need them in your project.

 

3) Get your landscape in sync

 

Your landscape needs to be synchronized. Check that your BW versions are part of a proper Support Package Stack and that they aren't on the wrong revisions.

 

On the HANA side, make sure you are on the latest revision of HANA. Yes, that means database and client and DBSL. A landscape that is not in sync is a landscape which is likely to fail.

 

4) Run an independent check of your HANA system

 

Unfortunately, hardware vendors sometimes mess up HANA installation and configuration. SAP have Note 1943937 which has details and there is an IBM-specific tool which checks for GPFS issues too.

 

Whilst you're there, check the trace folder of your HANA system. If it has a lot of logs, crashes or dumps then you have a problem. Get someone to resolve it before continuing.

 

5) Get the latest table sizing and table redistribution notes

 

There are SAP notes that apply fixes for table sizing and redistribution and these need applying prior to the export process. Search for the latest version and then implement them before exporting.

 

Specifically, check SAP Note 1921023 for SMIGR_CREATE_DDL and 1908075 for Landscape Redistribution. Also make sure you download the latest row store list from SAP Note 1659383 .

 

6) Get everything completely up to date

 

This is the #1 reason why I see problems in HANA migrations. You have to get everything up to date, especially SWPM, kernel, disp+work and r3load software. It requires manual work and you have to repackage the SWPM kernel files at times so you get the latest version. If you skip this or cut corners, you will pay for it later in the upgrade project.

 

7) Include the latest patches and notes

 

There is a common wisdom on SAP projects where people go to patch level N-1. This isn't the case with HANA, and you need to make sure you are on the latest patch during your project lifecycle. In addition, you need to search OSS for notes which need applying. One useful tip here is to use the SAP Note Search "Restrict by Product Components" --> SAP_BW --> Release 740 to 740 -> SAPKW74005 (for BW 7.40 SP05). You will now only see notes that relate to BW 7.40 SP05, which is neat.

 

Make sure you check SAP Note 1908075.

 

It's well worth spending time doing this analysis when you have some spare moments and noting the SAP Notes that you need to apply after the migration. If you don't do this then you will need to do it when you are tired, immediately after the migration.

 

8) Check the Important Notes list

 

The master note for BW 7.4 is SAP Note 1949273 and for BW 7.3.1 it is 1846493. You need to check and apply the corrections (and connected corrections) in these notes.

 

9) Follow post-processing steps

 

There are a number of post-processing steps detailed in the master upgrade guide. Amongst those, you should ensure you run ABAP Report RSDU_TABLE_CONSISTENCY to check for any problems, and refer to SAP Note 1695112 for more details.

 

10) Don't cut corners

 

You can't cut corners in a migration - you need to spend the time to get it right. I've been thinking lately, and SAP could really help make migration projects more successful by automating the above process. In the meantime, you need to be methodical and read all the available information and pay attention to the details. If you want your project to be a success then don't cut corners.

 

Final Words

 

The migration to SAP HANA can be a very smooth and simple process, if the technical team pay attention to the details. Every BW on HANA project I have seen in trouble has been either because of governance problems mentioned in my blog 10 Golden Rules for SAP HANA Project Managers, because of poorly architected hardware as per my blog Licensing, Sizing and Architecting BW on HANA or because of a technical team that didn't pay attention to the detail.

 

Sometimes you can get away with skipping some of the detail, but usually, you cannot.

 

A quick thank you to Sara Hollister, Scott Shepard and Glen Leslie  who provided some of these tips in an email to me, and to Lloyd Palfrey, who provided much of the rest.

 

What could be added to this list?

You got the time?

based on HANA rev. 70

 

World Time Zones Map

A question came up about timezone handling in SAP HANA.

As you might know, SAP HANA offers functions to deal with local time as well as UTC-based time and provides the means to convert between the two.
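For example (a quick sketch from the SQL console; 'CET' is just an illustrative time zone name - use whichever zones your installation knows about):

SELECT UTCTOLOCAL(TO_TIMESTAMP('2014-01-01 12:00:00'), 'CET') AS local_time,
       LOCALTOUTC(TO_TIMESTAMP('2014-01-01 13:00:00'), 'CET') AS utc_time
FROM DUMMY;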

 

But how does SAP HANA know what time zones exist, what their offsets are, and how does this information get updated once time zones change (and yes, they do change quite often, as can be seen here: International Date Line - Wikipedia, the free encyclopedia)?

 

SAP HANA can draw this information from two sources:

1. Built-in default values. These are hard-coded, unfortunately undocumented, values that are used when no other information is available.

2. Time zone data stored in the TZ* tables in schema SYSTEM (and only there).

SAP note 1791342 - Time Zone Support in HANA explains this in more detail.


Concerning the current time zone: this is taken from the Linux environment of the user that starts the indexserver, if I'm not mistaken.

 

This setting, which has to be the same on every node in a scale-out setup, can be reviewed in the system view M_HOST_INFORMATION.

 

select * from M_HOST_INFORMATION where upper(KEY) like '%TIMEZONE%'

 

HOST  |KEY            |VALUE

ld9506|timezone_offset|3600

ld9506|timezone_name  |CET

 

There you go, now you know.

 

- Lars

Webinar: Achieve Instant Insight and Infinite Scale with SAP HANA and Hortonworks Data Platform


Date: Wednesday, March 12, 2014

Time: 1:00 pm EST

 

Speakers:

David Parker, VP of Big Data, SAP

John Kriesa, VP Strategic Marketing, Hortonworks

 

Join SAP and Hortonworks on Wednesday, March 12, 2014, for a Webinar to see how you can gain instant access with SAP HANA combined with infinite storage and data exploration with Hortonworks Data Platform.

 

During the Webinar, you’ll see a demo highlighting the power of SAP HANA real-time insights combined with the depth of insight that Hadoop can provide and see how these two powerful and complementary technologies provide enterprises with a future-proof modern data architecture.

 

You’ll also learn:

  • How Hadoop and SAP HANA are complementary technologies in the modern data architecture
  • How a data reservoir with Hadoop will give organizations the ability to gain deep insight
  • How SAP HANA, SAP BusinessObjects Business Intelligence, SAP InfiniteInsight, SAP IQ, plus the Hortonworks Data Platform can provide you a complete Big Data solution

 

Register Today!

Webinar: Understanding the HANA Difference - Lowering Cost of Data Management with In-Memory Technologies


SAP and SAPinsider cordially invite you to attend the third session in our Understanding the SAP HANA Difference Webinar series, hosted by SAP and featuring Forrester Principal Analyst Noel Yuhanna.

 

Date: Wednesday, March 6, 2014

Time: 11:00 am EST

Speaker: Noel Yuhanna, Principal Analyst at Forrester Research, Inc.

 

Data stored in memory can be accessed orders of magnitude faster than data on disk. Although in-memory databases are not new, innovative distributed in-memory technologies, lower memory costs and automation are changing the way we build, deploy and use enterprise applications. In-memory databases support many use cases, including real-time data access, analytics, predictive analytics and other workloads. This Webinar focuses on trends in data management and in-memory technologies, and discusses how enterprises can save money by moving to in-memory and gain a competitive edge.

 

The five-part SAP HANA Difference Webinar series is devoted to demonstrating the differentiating technical features of SAP HANA and highlighting what makes SAP HANA the pre-eminent in-memory database management system for conducting real-time business.

 

SAP HANA– with its unique ability to perform advanced, real-time analytics while simultaneously handling real-time transaction workloads – is both the present and future of in-memory database management systems. Explore the many features and capabilities of SAP HANA today and discover what makes this innovative solution different.

 

Register Today!

Data Replication to SAP HANA using DXC method


As we all know, SAP HANA is one of the best and fastest appliances for getting information on the fly, but to do that it needs data. There are different techniques available for moving data into the SAP HANA DB. Here we are going to discuss how we can load data into SAP HANA using the Direct Extractor Connection (DXC).

 

SAP HANA Direct Extractor Connection (DXC) provides foundational data models to SAP HANA that are based on SAP Business Suite entities. Data in SAP systems requires application logic to satisfy business needs. SAP Business Content DataSource extractors have been available for many years as a basis for data modeling and data acquisition for SAP Business Warehouse. Now, with DXC, these extractors can deliver data directly to SAP HANA.

A key point about DXC is that it is a batch-driven process; in many use cases, batch-driven data acquisition at certain intervals (for example, every 15 minutes) is sufficient.

 

Here are the key reasons to choose DXC:

  • Significantly reduces the complexity of data modeling tasks in SAP HANA, as data is sent to HANA after all business content extractor logic has been applied in the source system
  • Speeds up timelines for SAP HANA implementation projects
  • Provides semantically rich data from the SAP Business Suite to SAP HANA
  • Reuses the existing extraction, transformation, and load mechanisms built into SAP Business Suite systems, over a simple HTTP(S) connection to SAP HANA (low TCO)
  • Requires no additional server or application in the system landscape (simplicity)

 

Limitations for DXC

  • The Business Suite system must be based on SAP NetWeaver 7.0 or higher (e.g. ECC) with at least the following SP level: Release 700 SAPKW70021 (SP stack 19, from Nov 2008)
  • The DataSource must have a key field defined (a procedure exists to define a key if one is not already defined)
  • Certain DataSources may have specific limitations:
    • Inventory types, e.g. 2LIS_03_BF - the data requires special features only available in BW
    • Certain Utilities DataSources - can work with one and only one receiving system
    • Some DataSources are not delta-enabled - not a specific issue for DXC or HANA, but something to take into account

 

From SAP NetWeaver version 7.0, SAP Business Warehouse (BW) is part of SAP NetWeaver, for example, ERP (ECC 6.0 or higher). This BW system is referred to as an “embedded BW system”. Typically, this embedded BW system is not used because most customers who run BW have it installed on a separate server, and they rely on that one. The default DXC configuration uses the scheduling and monitoring features of the embedded BW system but not its other aspects. DXC extraction processing bypasses the normal dataflow and sends data to SAP HANA instead. Below are the different configurations for SAP Business suite systems.

 

SAP Business Suite systems based on SAP NetWeaver 7.0 or higher

  • Example: ECC 6.0 (using the embedded BW)

 

A.jpg

SAP Business Suite systems lower than SAP NetWeaver 7.0

  • Example: ERP 4.6 (using the sidecar approach; in this case a separate BW system needs to exist in your landscape)

 

B.jpg

Steps to Configure DXC

 

Enabling XS Engine and ICM Service

  • SAP HANA Extended Application Services
  • SAP Web Dispatcher Service

 

Setup SAP HANA Direct Extractor Connection

  • Implement Mandatory Notes as per Installation Guide.
  • Set DXC Connection in SAP HANA
  • Import Delivery Unit.
  • Configure XS Application server to Utilize the DXC.
  • Verify the DXC is Operational.
  • Define User and Schema in HANA Studio
  • Define http connection to HANA in SAP BW (SM59)
  • Configure the Data Sources in BW to Replicate the Structure to HANA defined schema
  • Load the data to HANA using Info package in SAP BW

 

Enabling XS Engine and ICM Service

Change the instance value to "1" for the xsengine service on the SAP HANA studio Configuration tab to enable the XS engine, which handles the control-flow logic. Refer to the installation guide for details.


C.jpg

Enabling the web dispatcher service in SAP HANA Studio

Change the instance value from 0 to 1 for sapwebdisp on the SAP HANA studio Configuration tab to enable the ICM Web Dispatcher. DXC uses the ICM method to load or read data from SAP HANA; this method supports large volumes of data (in BWA we use the same approach when a cube has a high volume of data).

D.jpg
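If you prefer SQL over the Configuration tab, the same two changes can be made with ALTER SYSTEM ALTER CONFIGURATION. This is a hedged sketch - the file, section and parameter names mirror what the Configuration tab shows and may differ by revision, so verify them against the installation guide:

ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('xsengine', 'instances') = '1' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('sapwebdisp', 'instances') = '1' WITH RECONFIGURE;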


Check the XS Engine Service

Access the XS engine using the address below in Internet Explorer:

- http://<host name>:80<instance Number>

 

Set DXC Connection in SAP HANA

Download the DXC delivery unit into SAP HANA if this was not done along with the SAP HANA installation. You can import the unit from the location /usr/sap/HDB/SYS/global/hdb/content.

     - Import the unit using the Import dialog on the SAP HANA Content node.

Configure the XS application server to utilize DXC

     - Change the application_container value to libxsdxc (if a value already exists, append to it).

Test the DXC connection

     - Check the DXC connection using the path below in Internet Explorer:

     - http://<hostname>:80<instance Number>/sap/hana/dxc/dxc.xscfunc

     - It requires a user name and password to connect.

 

Define http connection in SAP BW

 

Now we need to create an HTTP connection in SAP BW using transaction SM59.

Input Parameters

     -- RFC Connection = Name of RFC Connection

     -- Target Host = HANA Host Name

     -- Service Number = 80 <Instance Number >

 

On the Logon & Security tab, maintain the DXC user created in HANA studio, using the basic authentication method.

E.jpg

 

After that we need to test the connection.

F.jpg

 

Embedded BW (or Sidecar) Parameters for HANA

 

Set up the following parameters in BW using program SAP_RSADMIN_MAINTAIN (transaction SE38 or SA38).

Parameters List in Program

G.jpg

PSA_TO_HDB_DESTINATION : where the incoming data should be sent - the value of the connection created in SM59 (in this example, XC_HANA_CONNECTION_HANAS)

PSA_TO_HDB_SCHEMA : the schema to which the replicated data should be assigned

PSA_TO_HDB : GLOBAL - replicate all DataSources to HANA

                          SYSTEM - only the specified clients use DXC

                          DATASOURCE - only the specified DataSources are used for DXC

PSA_TO_HDB_DATASOURCETABLE : the name of the table that holds the list of DataSources used for DXC.

 

 

Data Source Replication, Data Loading

 

Install the DataSource in ECC using RSA5. Here we have taken DataSource 0FI_AA_20 (FI-AA: Transactions and Depreciation). Replicate the metadata using the specified application component (the DataSource must be a 7.0 version DataSource; a 3.5 version DataSource needs to be migrated first). Activate the DataSource in SAP BW.

Once the DataSource is activated in SAP BW, it creates the following tables in the defined schema:

 

  • /BIC/A<data source>00 – IMDSO Active Table
  • /BIC/A<data source>40 –IMDSO Activation Queue
  • /BIC/A<data source>70 – Record Mode Handling Table
  • /BIC/A<data source>80 – Request and Packet ID information Table
  • /BIC/A<data source>A0 – Request Timestamp Table
  • RSODSO_IMOLOG - IMDSO related table. Stores information about all data sources related to DXC.

 

Data is now successfully loaded into table /BIC/A0FI_AA_2000 once the load request is activated.
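As a quick sanity check (hedged - replace the schema placeholder with the schema you maintained in PSA_TO_HDB_SCHEMA), the IMDSO active table can be queried directly:

SELECT COUNT(*) FROM "<DXC_SCHEMA>"."/BIC/A0FI_AA_2000";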


H.jpg

Where to find information on SAP BW on HANA migrations


I've been part of projects to migrate to BW on HANA recently and one of the things that I noticed was that resources can be fragmented and tricky to find. I thought I'd curate a list of places to go to find information. If I have missed something then please ping me so it can be added to here.

 

1) Best Practice Guide

 

Boris Zarske maintains a Best Practice Guide - Classical Migration of SAP NetWeaver AS ABAP to SAP HANA and this is a great place to start. It covers all aspects of a migration and should be in your toolkit because Boris is aggregating information directly from the development team.

 

However, it only covers classical migrations, and if you're doing BW on HANA then you should ideally be considering DMO.

 

2) Database Migration Option (DMO)

 

Roland Kramer maintains SAP First Guidance - Using the DMO Option to Migrate BW on HANA and this is the place to find out information about this. It is applicable to BW 7.0 and above and can help automate the upgrade and migration to SAP HANA. DMO doesn't work in every scenario, so make sure that it can do what you need.

 

3) Migration Cockpit & Checklist

 

If you go to SAP Note 1909597 - SAP NetWeaver BW Migration Cockpit then you can install and configure program ZBW_HANA_MIGRATION_COCKPIT. This program runs on BW 3.5 or above, which is very cool.

 

In addition, as recommended by Ali S Qahtani, you should consider applying SAP Note 1729988 - SAP NetWeaver BW powered by SAP HANA - Checklist Tool, which provides program ZBW_HANA_CHECKLIST or ZBW_HANA_CHECKLIST_3x, depending on your version of BW. This is a pretty neat checklist and a presentation is attached to the note.

 

4) Architecting BW on HANA


I wrote a blog on Licensing, Sizing and Architecting BW on HANA. In addition, Marc Hartz' guide on SAP NetWeaver BW Powered by SAP HANA Scale Out - Best Practices is important if you have a large system, as is Marc Bernard's How NOT to size a SAP NetWeaver BW system for SAP HANA.

 

5) Managing your BW on HANA Project

 

I wrote blogs on 10 Golden Rules for SAP HANA Project Managers, and 10 Golden Rules for SAP BW on HANA Migrations. Hopefully they are useful for you.

 

Roland Kramer also wrote Three things to know when migrating NetWeaver BW on HANA, which is worth reading. This refers to SAP First Guidance Collection for SAP NetWeaver BW  powered by SAP HANA, which in turn refers to Implementation - BW on HANA Export/Import, SAP First Guidance - Using the DMO Option to Migrate and SAP First Guidance - SAP-NLS Solution with Sybase IQ. Wow, this is recursive documentation!

 

6) HANA Basis Reference Guide

 

Andy Silvey has written the awesome The SAP Hana Reference for NetWeaver Basis Administrators, which is a go-to guide on HANA Administration. It is well worth reading if you're a Basis consultant moving to the HANAsphere.

 

7) ABAP Post-Copy Automation

 

Michaela Pastor wrote a very handy blog about ABAP Post-Copy Automation, which is all about reducing the time to do system copies, and using the same ABAP source system for more than one BW system.

 

8) Some additional blogs

 

Sunny Pahuja's blog on Some points to remember for Database Migration to HANA  is very detailed though a little out of date.

 

Final Words

 

Since I've written this, I've realized that there is a lot of information out there, which may be overwhelming. I do encourage though, if you are planning a BW on HANA migration, that you take a look at this information before you build out your plan. You will be much better informed and I have no doubt that you will change your plan for BW on HANA.

 

Thanks to all of those that helped curate this, especially Thomas Zurek, Klaus Nagel, Boris Zarske, Roland Kramer, Lloyd Palfrey, Marc Bernard, Lothar Henkes.

 

If you have some content that I should link to here, then please let me know!

Cutting 50% of testing, implementation and change management costs of BI projects with HANA


The first in this series of blogs talked about why increasing BI productivity (i.e. cutting the time and cost of developing projects) is so important, and gave some examples of customers who had already achieved this. The second part of the series looked at how to cut the requirements gathering, scoping and design phases of the BI project.

 

We saw in the second part that a typical BI development project plan might look like the following:

  • Requirements gathering / scoping = 20-30%
  • Design & development = 40-50%
  • Testing = 20-40%

… and then there’s a cost of about 30% change management every year. So how do customers cut development time, testing, documentation, implementation and rework?


  • The shift to Self Service: Traditionally, most analytics/BI teams report a balance of 80% reporting / 20% analytics; we often see a shift to 20% reporting / 80% analytics with HANA clients.

    BIcombined.png 
    Assuming that the glossary and semantic layers have already been developed, the additional development effort for self-service analytics is minimal, and the turn-around almost instantaneous.

  • The significantly simpler data model that is usually implemented with HANA also leads to simpler and faster development - no physical instantiations of aggregates, roll-ups or summaries.
  • Multiple engines in a single platform: The ability for a single platform to handle different analysis types, including text, spatial, graph etc. results in eliminated steps in managing the movement of data to specialised engines, and gathering and harmonising multiple result sets from different systems for a single complex query. Data quality also becomes more assured, as all systems are working from the same atomic dataset, without differences due to time-lags in loading / refreshing.
  • Elimination of multiple staging areas: The ability to handle both OLTP and OLAP within the same platform allows data to be read and written at the same time without any impact on performance to end user. This means there is no need to create duplicate staging areas to support different time-zones or different scheduling priorities.
  • Development with full data sets: Typically, design and development are carried out with small subsets of data to ensure good performance and quick iterations. The outcomes of this development are presented to users, who conduct a first User Acceptance Testing (UAT) cycle. Changes based on this feedback are incorporated, after which the development environment is back-loaded with large data-sets. Further development is carried out, including additional development for performance-tuning, after which additional UAT is carried out. Given the performance of HANA, development can be done directly on full data-sets, and only one iteration of the testing cycle is needed.
  • Reduce testing time: The removal of performance optimisation components of the model results in less layers, less objects to test and less time to load, resulting in quicker testing cycles. Changes to the model are easier to implement due to their simplified structures and reduced / eliminated ETL processing. All this results in faster testing cycles, faster iterations, quicker changes as well as the ability to perform multiple test scenarios in shorter periods, thereby mitigating UAT effort and time to delivery.
  • Reduced documentation efforts: The reduced complexity of the system landscape with less servers, less software, less storage, simpler data models and eliminated physical instantiations translates into significantly simpler and less time-consuming documentation.
  • Reduced change management costs: All the reasons that requirements gathering, scoping, design, development and testing with HANA lead to shorter development time also lead to significantly lower change management costs. Over a 5 year period, organisations will typically spend 1.5x the original development costs on change management, so improving the effectiveness and productivity in this area has a huge payback when viewed over a 5 year period. If you can reduce the change management cost by just 30%, you will typically save an amount equivalent to half the total original development costs.


With all the time savings that can be achieved throughout the BI project cycle, from design to documentation and on-going change management, it is easy to see why this is such a compelling reason to look at HANA not just in terms of its ability to transform business processes, but also in terms of its ability to change the way that BI is delivered in organizations. Think through the implications for your business if you could deliver twice as many BI projects with the same team, or deliver them 50% ahead of schedule. If you're unsure about whether this is real, have a look at some of the customers who are already seeing these benefits:

Lexmark - http://scn.sap.com/community/business-trends/blog/2013/05/15/lexmark-cio-sap-hana-gave-us-the-ability-to-respond-much-more-quickly-to-business-requirements


My suggestions for you:

  1. Benchmark your BI process. What is the time needed to build and deliver a BI project? How does this compare to your peers? (SAP Benchmarking can help) - specifically, I would suggest the Business Intelligence and High Performance Analytics surveys
  2. Look at a recent BI project plan. Which workstreams could be eliminated or accelerated with HANA?
  3. Look for an upcoming project that would benefit from the speed and agility that SAP HANA can offer.

How can you save 50% of the time and cost of requirements gathering, design and development of BI projects with HANA?


The first in this series of blogs talked about why increasing BI productivity (i.e. cutting the time and cost of developing projects) - is so important, and gave some examples of customers who had already achieved this. Here's another example of a customer who is seeing huge productivity improvements and increasing business user satisfaction:


kennametal.PNG

 

In this blog, we’ll take a look at how customers are achieving these productivity gains:

 

A typical BI development project plan might look like the following:

  • Requirements gathering / scoping = 20-30%
  • Blueprinting / Design = 25-35%
  • Development = 20-30%
  • Testing / rework / implementation = 15-20%


… and then there's a cost of about 30% change management every year. Although requirements gathering and scoping plus design represent 45-60% of the effort, they usually represent more than 70% of the project in terms of elapsed time, just because of the difficulty of getting the right people in the room, capturing their needs and building a data model and structure that delivers against all the requirements. Anything that can help us streamline these two processes will have a significant impact on project delivery time and cost.


What are the big problems with this approach?

  • The requirements gathering and scoping requires a lot of investment from the business; the process is iterative and time-consuming.
  • The first time the business user sees anything is usually 15-20 weeks into a 6-month process. This is perceived as unresponsive (even if IT has been working flat-out).
  • By this time, some of the requirements have changed, and there is a further iteration, or even multiple iterations to catch up to new user requests. Some projects are obsolete before the user has even seen the first iteration.
  • The data models that power our BI systems are inflexible, and expensive to maintain and change.

 

What do the requirements gathering and design processes look like today?

 

This usually starts with a series of interviews between developers and business analysts or business users, and is one of the most time-consuming parts of the process. Business users are asked what datasets they would like to analyse. That’s the easy bit. Now, the business-person is asked for all the questions that they are ever likely to ask of that data. This is hard, especially as the businessperson doesn’t know what questions they will ask in future. This is especially true now, as business conditions and market dynamics change quickly and constantly.

 

These interviews are conducted with a number of different stakeholders, captured in Word, printed out, and piled up in stacks that cluster requirements. A huge, perfect (?) data model is built that will handle all the requirements. The modelling takes into account not just the requirements, but also all the artefacts needed to deliver acceptable performance, including indexes, aggregates and summaries.

 

What changes when you move to BI project build based on HANA?

 

Analytics vs Reporting: One of the first fundamental shifts is the balance between analytics and reporting. Traditionally, most analytics/BI teams report a balance of 80% reporting / 20% analytics. The combination of very high query performance against granular data without IT tuning (e.g. no index constraints), full data access (subject to permissions) and modern, easy to use analytics software shifts this to 20% reporting / 80% analytics. Assuming that the glossary and semantic layer have already been developed, the effort for self-service analytics is minimal, and the turn-around almost instantaneous.

 

Remaining projects easier and faster to scope: With HANA, data only needs to be physically stored at an atomic level, resulting in a very simple physical data model. All modelling on top of this data is logical, and all aggregates / roll-ups are calculated on demand. This means that you don’t need to know all of the questions that will be asked of the data up front; you can deliver against a small set of requirements quickly, hand over to the business and then iterate with the next set of requirements.
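
To make this concrete, here is a minimal sketch in SQL (table, column and view names are invented for illustration, not taken from any customer project). The only physical object is a column table at transaction grain; the roll-up is a view that is aggregated on demand:

    -- The only physical object: an atomic, transaction-grain column table
    CREATE COLUMN TABLE "SALES_FACT" (
        "ORDER_ID"   INTEGER,
        "ORDER_DATE" DATE,
        "REGION"     NVARCHAR(20),
        "PRODUCT_ID" INTEGER,
        "REVENUE"    DECIMAL(15,2)
    );

    -- Roll-ups are logical: defined once, aggregated at query time,
    -- with no aggregate table to load, index or keep in sync
    CREATE VIEW "V_REVENUE_BY_REGION_MONTH" AS
    SELECT "REGION",
           YEAR("ORDER_DATE")  AS "ORDER_YEAR",
           MONTH("ORDER_DATE") AS "ORDER_MONTH",
           SUM("REVENUE")      AS "TOTAL_REVENUE"
      FROM "SALES_FACT"
     GROUP BY "REGION", YEAR("ORDER_DATE"), MONTH("ORDER_DATE");

A new requirement (say, product-level detail) then becomes another view over the same physical data, rather than a reload and remodel.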

 

Does HANA impact Design, Development, Testing and Change Management?

HANA significantly accelerates these stages of the BI development process, and that will be the topic of the next blog in this series...



Many thanks to Henry Cook and Wilson Kurian for their contributions. This blog was originally posted on saphana.com

Delivering BI projects in 50% of the time with SAP HANA


What if you could deliver your BI projects in 50% or less time than you do with your traditional systems today? At the HANA Global Center of Excellence, we are lucky enough to be able to work with lots of customers who are seeing these benefits today, including:


Honeywell - http://scn.sap.com/docs/DOC-38529

Lexmark - http://scn.sap.com/community/business-trends/blog/2013/05/15/lexmark-cio-sap-hana-gave-us-the-ability-to-respond-much-more-quickly-to-business-requirements
Kennametal - http://www.saphana.com/docs/DOC-3648

 

In-memory technologies like SAP HANA have attracted huge attention in the BI community because of their ability to deliver response times that are hundreds, thousands and even hundreds of thousands of times faster than when using traditional databases. This is exciting and is changing the way we view BI, but I would argue that this speed is not the most important benefit that SAP HANA delivers…it is productivity.

 

Bl_Accelerator_project shrink.png

 

How important is productivity when compared to your total BI spend?
This will vary significantly from customer to customer (and I’d like to see your breakdown if you are willing to share), but here is a breakdown of costs that we have recently seen across a number of large customers where we have benchmarked total cost. It is clear that any improvement in productivity is the single biggest lever in managing your total BI TCO.


BI_Accelerator_TCO_BI.PNG


How are we achieving this increased productivity?
We’ll address this in more detail by looking at each step in the BI process (requirements capture, design, development, testing, implementation and change management), but here are some of the key drivers:

  • Reduction in layers. The speed of SAP HANA allows you to get rid of the layers in the BI architecture that were introduced to enhance performance, including aggregates and summaries. Each layer adds complexity, slowing down development, testing and change management (a sketch of what this removes follows this list).
  • Logical vs physical modelling. Building logical models instead of physical models allows you to both build and change models more easily, without having to unload data, make changes, re-load, re-index etc.
  • Iterative real-time BI development on the entire data scope: Typically, BI development is done with small data subsets because of performance issues – this keeps your load and development times down. Organisations usually then conduct user acceptance testing with the small subsets, go back, reload larger volumes of data, refine the development and conduct a second round of user acceptance testing. This is eliminated with SAP HANA, where you can develop with full sets of data. The high-performance loading and reporting also means more iteration cycles can be run within a day, in some cases even dynamically when interacting with the business users.
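
As a rough sketch of the first two drivers (all object names here are invented for illustration, not taken from a customer project), this is the kind of artefact that disappears: a physically stored, separately refreshed summary. With SAP HANA the same answer is computed on demand from the granular table.

    -- A hypothetical granular sales table
    CREATE COLUMN TABLE "SALES" (
        "ORDER_DATE" DATE,
        "REGION"     NVARCHAR(20),
        "REVENUE"    DECIMAL(15,2)
    );

    -- Traditional layered approach (the part that gets eliminated): a physical
    -- summary that must be refreshed with every load cycle, indexed, and
    -- re-worked whenever the requirement changes
    CREATE COLUMN TABLE "SALES_MONTHLY_SUMMARY" (
        "REGION"        NVARCHAR(20),
        "ORDER_MONTH"   VARCHAR(7),
        "TOTAL_REVENUE" DECIMAL(17,2)
    );

    INSERT INTO "SALES_MONTHLY_SUMMARY"
    SELECT "REGION",
           TO_VARCHAR("ORDER_DATE", 'YYYY-MM'),
           SUM("REVENUE")
      FROM "SALES"
     GROUP BY "REGION", TO_VARCHAR("ORDER_DATE", 'YYYY-MM');

    -- With SAP HANA, the same question is answered directly against the
    -- granular table at query time, with no summary layer to maintain
    SELECT "REGION",
           TO_VARCHAR("ORDER_DATE", 'YYYY-MM') AS "ORDER_MONTH",
           SUM("REVENUE")                      AS "TOTAL_REVENUE"
      FROM "SALES"
     GROUP BY "REGION", TO_VARCHAR("ORDER_DATE", 'YYYY-MM');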


Before we get into more detail of how we are achieving the productivity increases, let’s think first about what this increased productivity would give you:

  1. Lower FTE (full-time employee) cost, which leads to the ability to deliver more projects
  2. Less time elapsed, which means cash-flow business benefits pulled forward
  3. Strategic (less tangible) benefits: Lower risk of obsolescence, first mover advantage, higher customer satisfaction
  4. And for delivery organisations… higher day rates, more margin on fixed rate contracts


The most obvious benefit of this increased productivity is that delivering the project costs less.

If you can cut 3 months out of a 6-month project with the same number of resources, whether internal or external, you will cut 50% of the project costs. If you have 10 people working on a project full-time for 6 months at a fully loaded cost of €100,000 a year, the project will cost 10 FTE x €100,000 / year x 0.5 years = €500,000. Cutting this by 50% for one project will deliver a saving of €250,000. If you can do this for 4 projects a year, that’s €1,000,000.

Now, some people will argue that you will only get these savings if you cut the BI workforce by 50%. What we find with most of our customers is that the BI teams get to tackle more projects. Most CIOs and BI teams today have a long list of to-do’s. Most teams are stuck on the top 3, the tasks that are the most urgent but not necessarily the most important. Since the top 3 consume all their resources, they keep pushing the other tasks, which are often more important long term, more strategic, and probably more fun and rewarding (e.g. leveraging Big Data to improve customer churn, or delivering predictive models so that business users can do more scenario planning).

BI_Acceleration_MoreTasks.docx.png

The second obvious benefit of the increased productivity is the shorter elapsed time to deliver the BI project.

This often has even bigger implications than the FTE cost reduction, ranging from first-mover advantage and faster time to compliance to less time for scope changes during the course of the project and less risk of the project being obsolete before it even sees the light of day.

The big question is how to quantify the benefits of this shorter project elapsed time, and here is one approach that our customers have found helpful. Most big BI projects today have an explicit business case. Let’s assume that the BI project will deliver better customer segmentation, which will in turn improve up-sell conversion rates by €100k a month. If the project is delivered in April instead of July, the first-year benefit will increase by €300k. Not only will you see the benefit sooner (quicker time to value), but the 3 months of additional benefit would have been lost forever in the slower traditional approach.


Benefits (€k)                    Jan    Feb    Mar    Apr    May    Jun    Jul    Aug
3-month project development      dev    dev    dev    100    100    100    100    100
6-month project development      dev    dev    dev    dev    dev    dev    100    100

(dev = month spent developing, no benefit yet)

Finally, the soft benefits…
The most often quoted additional benefit that our customers see is that business stakeholder satisfaction increases dramatically. IT is perceived as more responsive and delivers more (and more strategic) projects. If the project is customer-facing, enabling self-service analysis of spend, the external customer satisfaction and net promoter score (NPS – the % of people who would recommend a product or service to friends and family) will increase.
Some implementation partners are worried about the consequences of this faster delivery time. Most, however, see the opportunities that this delivers. For the same budget, they can deliver more projects and become more strategic in the account. From a competitive perspective, if they can deliver the same project in 50% of the time that their competitors need, there is a lot less negotiation with the customer about the day rates. Finally, if the time spent can be shifted to more value-added activities, partners are able to attract and keep better individuals.

 

In the next blog in the Series, we’ll start to take a look at the different phases of BI projects, and how SAP HANA enables customers to significantly cut the time and cost to deliver…

 

If you are an SAP HANA customer and are seeing BI productivity benefits, I’d love to hear your story. Include contact details in your comments and I’ll call you; alternatively, you can reach me at Andrew.de.rozairo@sap.com or +44 7977 257299.

 

(Many thanks to Henry Cook, Wilson Kurian and Bernard Kenny for their contributions to this blog, originally posted on saphana.com)

Physical vs Logical modelling = Mixed tapes vs. iTunes playlists


We were discussing the agility that HANA delivers with a partner recently, and searching for powerful metaphors for describing it to business stakeholders. There are a number of blogs around the benefits that this agility brings, as well as the drivers for this agility in both early project stages (requirements gathering, design and development) and later stages (implementation, testing, documentation and change management).


mixedtapes.JPG


Although the speed and computational power enable some of this agility by delayering the BI process, a lot of the agility is also delivered by the use of logical instead of physical modelling. This is also a key differentiator, and one that deserves some more attention. I may be showing my age here, but we’ll use the analogy of creating mixed tapes vs. creating iTunes playlists…

 

The creation of a mixed tape was a labour of love… It involved many hours of recording from your vinyl records or, for those of us who had dual-cassette tape decks, from tape to tape. You had to think carefully about the entire tape – the mood, the order of the songs, the length of each song and the total tape length. Once you had finished your masterpiece, changing it was very difficult. If you wanted to replace a song, you needed to find another song that was the same length or shorter (and if it was shorter, decide whether you could live with the blank spot on the tape). Reordering the tracks…. didn’t happen.


Now let’s consider the iTunes playlist. You store a single copy of all the songs or tracks – your library of songs. To create a playlist, you just drag and drop into a logical container. You don’t copy the music – you still only have a single copy of the track in your iTunes library. This is true no matter how many playlists use that same track.
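
If you translate the metaphor into SQL (table and view names below are invented purely for illustration), the library is the one physical table and each playlist is just a view over it; no track is ever copied:

    -- The library: the only physical copy of every track
    CREATE COLUMN TABLE "TRACK_LIBRARY" (
        "TRACK_ID"         INTEGER,
        "TITLE"            NVARCHAR(100),
        "ARTIST"           NVARCHAR(100),
        "GENRE"            NVARCHAR(40),
        "DURATION_SECONDS" INTEGER
    );

    -- Playlists are purely logical: re-ordering, duplicating or deleting a
    -- playlist never touches the stored tracks
    CREATE VIEW "PL_ROAD_TRIP" AS
    SELECT "TRACK_ID", "TITLE", "ARTIST"
      FROM "TRACK_LIBRARY"
     WHERE "GENRE" = 'Rock';

    CREATE VIEW "PL_DINNER_PARTY" AS
    SELECT "TRACK_ID", "TITLE", "ARTIST"
      FROM "TRACK_LIBRARY"
     WHERE "GENRE" IN ('Jazz', 'Soul');

    -- Grown out of ABBA? Delete the track once and it is gone from every playlist
    DELETE FROM "TRACK_LIBRARY" WHERE "ARTIST" = 'ABBA';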

 

tapes.jpg

 

 

If you need to, you can easily see which playlists include any particular track (let’s call that musical compliance). If you discover that you need to get rid of a track for any reason (like you have grown out of your ABBA phase, or some other embarrassing stage of musical development), you delete it once from the library.
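
The musical compliance check has a direct database equivalent: the system keeps track of which logical objects depend on which physical ones. Continuing the hypothetical sketch above, a query along the following lines (using HANA's OBJECT_DEPENDENCIES system view; the column names are from memory, so worth verifying against your release) shows which "playlists" would be affected before you remove or change the library table:

    -- Which views reference the hypothetical TRACK_LIBRARY table?
    SELECT DEPENDENT_SCHEMA_NAME, DEPENDENT_OBJECT_NAME
      FROM OBJECT_DEPENDENCIES
     WHERE BASE_OBJECT_NAME = 'TRACK_LIBRARY'
       AND DEPENDENT_OBJECT_TYPE = 'VIEW';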


What are the other tasks that become easier?

  • Changing the order of tracks
  • Duplicating playlists and then amending them
  • Mixing multiple playlists to create a whole evening’s worth of music
  • Having different ‘logical’ views of your tracks (alphabetical, by artist, by genre, by frequency of listening to)
  • Seeing which tracks are never played, and deciding if you want to keep them
  • …. Almost forgot documentation – writing down all the songs, in order, checking spelling, all in neat handwriting. Any changes? Tipp-Ex if you’re ready to compromise, a newly written playlist if not.


                                           tape_cover.jpg tipex.jpg


And of course, the building of a playlist takes minutes, rather than the 10+ hours that it used to take to create a great mixed tape. Which means that the time-to-value is much shorter, although we didn’t think of it in those terms - maybe it should be time-to-enjoyment or time-to-tunes. And to answer your next question, no, these are not my mixed tapes.

 

How far will this analogy go? Where are its limitations? I'd like to hear your thoughts either way...

 

Many thanks to Brian Raver, who introduced me to the metaphor. Blog originally posted on saphana.com

SAP HCI Blog Series: Integration Platform as a Service (iPaaS)


In the first introductory blog, available via the link below:

http://scn.sap.com/community/developer-center/cloud-platform/blog/2013/06/24/sap-moving-integration-to-the-cloud-sap-hana-cloud-integration-platform

I highlighted the power of SAP HCI and its functionality. It is with pleasure that I am updating the community on the technical aspects of getting hands-on experience with SAP HCI. I am starting a blog series in which I will describe all the steps needed to set up the appropriate environment and the technical implementation required to get started with SAP HCI.

Part 1: Understanding the Power of SAP HCI: Getting Started

In this first blog, I will describe the steps needed to set up the SAP HCI environment for modeling and configuration.

SAP HCI Workbench and Tools

To get started, this section describes the tools to set up SAP HCI.

Eclipse: The first tool necessary is Eclipse Juno SR2 (4.2.2), which contains the integrated development environment (IDE) and the perspectives that support SAP HCI development. Other Eclipse releases currently do not provide support for SAP HCI. Eclipse Juno can be downloaded from the link below.

http://www.eclipse.org/downloads/packages/release/juno/sr2

SAP HCI Plugins: The appropriate plugins to be installed in Eclipse Juno can be accessed from the link below. They contain the necessary artifacts for creating, designing and monitoring Integration Flows (iFlows).

https://tools.hana.ondemand.com/#hci

Certificates and Security Artifacts: Designing and deploying integration flows on an SAP HCI tenant requires security and, depending on the type of scenario, different security artifacts. To generate SSL artifacts (Certificate Authority, Private Key, Public Key) and SSH keys (Private and Public), the links below are also useful.

SSH Key pair (Private and Public):

http://wiki.sdn.sap.com/wiki/display/XI/Generating+SSH+Keys+for+SFTP+Adapters+-+Type+1

NB: This is specific to an SFTP scenario.

SSL CA, Private, and Public Keys (SAP Logon required) using SAP Passport CA:

A link to retrieve the certificates (SAP Passport CA) can be accessed if you or your organization has an S-number to access the SAP Service Marketplace.

NB: Other types of CAs are also possible apart from the SAP Passport CA.

Since SAP HCI currently supports four connectivity options (SOAP, IDOC, SFSF and SFTP), the security artifacts are pretty much the same, but there are several certificate authorities presently supported by SAP that can be used to sign the SSL certificate. SAP provides an overview of the supported CAs in the current release of SAP HCI.

Tenant Account: A tenant account, provided by SAP, is required. It supplies the tenant ID and the operations server to be configured in Eclipse, on which the Integration Flows (iFlows) will be deployed. For more information about how to apply for a tenant account, please contact your SAP representative or send me an e-mail.

SOAP UI 4.0.1: SOAP UI is a lightweight tool to test the scenarios. How to use this tool is described in Part 3 of this blog. To download SOAP UI, use the following link: http://sourceforge.net/projects/soapui/files/

This concludes the first part of this series of blogs. I have gone through the requirements necessary to set up a modeling platform for SAP HCI scenarios. In the next part, we will describe how to implement a SOAP2SOAP scenario via SAP HCI.


Written by: Abidemi Olatunbosun, Rojo Consultancy BV, The Netherlands

Contributor: Geert van den Reek, Rojo Consultancy BV, The Netherlands

SAP HCI Blog Series: Integration Platform as a Service (iPaaS) Part 2


Part 2: Understanding the Power of SAP HCI: SOAP2SOAP Scenario Design Time


In part one of this series, I discussed how to set up the Eclipse-based modeling environment for creating iFlows and other SAP HCI artifacts. In this second part, I will describe at a high level how to implement a simple SOAP2SOAP scenario. SOAP UI will be used to trigger our client web service. So, let’s get started.

Modeling in Eclipse is done in two perspectives:

Integration Designer: Creating iFlows, mappings, importing WSDL files and other existing SAP PI artifacts from the ESR, configuring communication channels and security artifacts.

Integration Operations: Deploying iFlows, monitoring the runtime.

 

Adding the operations server for deployment and the SAP PI server for importing SAP PI artifacts

Launch Eclipse, go to Window → Perspectives → SAP HANA Cloud Integration and enter the necessary details to connect to the operations server (tenant) and the Enterprise Service Repository for importing SAP PI objects and artifacts such as message mappings, operation mappings and WSDLs.

operations server.png

Create iFlow:

  1. Switch to Integration Designer Perspective: Go to Window → Open Perspective → Other → Integration Designer

 

iflow.png

To create an iFlow, first create an integration project: click in the Project Explorer and select New → Other → SAP HANA Cloud Integration → Integration Project

integration project.png

Follow the wizard through and the integration project will be created. Then follow the steps above to create an Integration Flow in the Integration Project, and the following screen is shown:

hci workspace 1.png


Generate Security Artifacts

This section describes the security artifacts that are necessary to configure the sender and receiver systems. Follow the steps described below:

1) SSL Certificate CA, Public and Private Keys

  • Download the certificate (e.g. SAP Passport CA) from the SAP link (using your S-number and password) via Firefox [1]
  • Download the Private Key from this certificate following the steps listed in this blog [2]
  • Save the Private Key in a folder (to be loaded into SOAP UI through the SSL Settings)
  • Download the certificate (SAP Passport CA) from the SAP link (using your S-number and password) via Internet Explorer [3]
  • Download the Public Key from this certificate following these steps: 1) Internet Options - 2) Content - 3) Certificates - 4) Personal - 5) Export
  • Save the Public Key in a folder (to be exchanged with your partner, e.g. SAP)

2) SOAP UI Settings: Load the Private Key into SOAP UI through the following steps

  • File → Preferences → SSL Settings, then browse to your Private Key (downloaded in step 1).

    3) Deploy Java Key Store

  • A Java keystore containing the CA and the Public Key from the HTTPS web service should be deployed as an artifact on the operations server. This step is currently performed by SAP.

    4) Set up SOAP UI

Call the service exposed via SAP HCI using SOAP UI. The endpoint to be used in your SOAP UI can be constructed in the following manner: https://<operations server name>/cxf/<path defined in the sender channel>.

    5) Monitoring:

  Check in SOAP UI that the message was sent successfully, and monitor it through the Integration Operations perspective in the SAP HCI Eclipse IDE.

soap ui 1.png

Once all the security artifacts have been made available, the systems can be configured to communicate with each other. The configuration steps are described in detail in part 3 of this blog. At this point you are able to configure a simple SOAP2SOAP scenario. If you have questions thus far, please do not hesitate to contact me.


Written by: Abidemi Olatunbosun, Rojo Consultancy BV, The Netherlands

Contributor: Geert van den Reek, Rojo Consultancy BV, The Netherlands
