
SAP HANA: Dynamic Ranking using Script based (SQL Script) Vs Graphical Calculation view Vs Script based (CE Functions)


Hi Folks,

 

In this blog, I will explain how to achieve "custom" ranking using a graphical calculation view, and compare it with the two script-based approaches.

 

Problem Description:

 

We have to show the top 2 (the exact number depends on input from the customer) employees by salary for each region, i.e., the top 2 earners in each region.

 

**Note: Employees with the same salary share the same rank, and ranks are assigned in descending order of salary.

 

Example: Let us take the same Employee table I used in my previous blog:

 

SAP HANA: Using "Dynamic Join" in Calculation View (Graphical)

 

Using a script-based calculation view with SQL Script:

We can achieve this very easily with window functions in a script-based calculation view, as shown below:

 

SELECT "REGION","EMPLOYEE NAME","SALARY" FROM

(SELECT "REGION","EMPLOYEE NAME","SALARY",RANK() OVER (PARTITION BY "REGION" ORDER BY SALARY DESC) AS RANK

FROM EMPLOYEE)

WHERE RANK <= 2 ( we can use input parameter to make this dynamic)
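To make the top N dynamic, the literal 2 can be replaced with an input parameter. A minimal sketch, assuming an input parameter named IP_TOP_N defined on the view (the name is my own, not from the original post):

-- Sketch: the same ranking query with a parameterized threshold
SELECT "REGION","EMPLOYEE NAME","SALARY" FROM
(SELECT "REGION","EMPLOYEE NAME","SALARY",
        RANK() OVER (PARTITION BY "REGION" ORDER BY "SALARY" DESC) AS "RANK"
 FROM EMPLOYEE)
WHERE "RANK" <= :IP_TOP_N;

Note that RANK() leaves gaps after ties; if consecutive ranks are preferred, DENSE_RANK() can be used instead.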

 

I have achieved the same result using a graphical calculation view as well. Please note that we cannot use a "Rank" function in the graphical calculation view (at least it is not supported in the current version).

 

Using Graphical Calculation View:

 

Please find the snapshot of the model created below:

 

[Screenshot: the graphical calculation view model]

 

Output:

 

 

[Screenshot: query output]

 

As seen above, based on the input you give (e.g., 2), you get the top 2 employees per region, ranked by salary.

 

Using a script-based calculation view with CE functions:

 

As it would be difficult to show all the nodes and their screenshots, I have recreated the same logic using CE functions, as shown below:

 

/********* Begin Procedure Script ************/
BEGIN

-- Base column table
t = CE_COLUMN_TABLE(GBI_TD_OPS.EMPLOYEE);

-- Two projections of the same data, each with a constant dummy column
t1 = CE_PROJECTION(:t,[CE_CALC('1',int) as dummy, "REGION","EMPLOYEE NAME", "SALARY"]);
t2 = CE_PROJECTION(:t,[CE_CALC('1',int) as dummy, "REGION","EMPLOYEE NAME" AS "EMPLOYEE NAME1", "SALARY" as SALARY2]);

-- Pair every employee with every employee in the same region
t3 = CE_JOIN(:t1, :t2, [dummy,REGION]);

-- Number of employees per region
t4 = CE_AGGREGATION(:t1,[sum(dummy) as "count"],[dummy,"REGION"]);
t5 = CE_PROJECTION(:t4,["REGION","count",dummy]);

-- Attach the per-region count to each employee pair
t6 = CE_JOIN(:t5, :t3, [dummy,REGION],["count",dummy,"REGION","EMPLOYEE NAME","EMPLOYEE NAME1","SALARY","SALARY2"]);

-- Keep only pairs where the paired employee earns no more than the first
t7 = CE_PROJECTION(:t6, ["REGION","EMPLOYEE NAME","EMPLOYEE NAME1", "SALARY", SALARY2,dummy,"count"],'"SALARY2" <= "SALARY"');

-- rank1 = number of employees earning less than or equal to this employee
t8 = CE_AGGREGATION(:t7,[count("EMPLOYEE NAME") as rank1],["REGION","EMPLOYEE NAME", "SALARY","count"]);

-- rank = count - rank1 + 1 turns that into a rank by descending salary
t9 = CE_PROJECTION(:t8, ["REGION","EMPLOYEE NAME", "SALARY", rank1,"count",CE_CALC('"count"-"RANK1"+1',int) as rank]);

var_out = SELECT REGION,"EMPLOYEE NAME" AS "EMPLOYEE_NAME",SALARY,rank as "rank" FROM :t9;

END;
/********* End Procedure Script ************/

 

 

Hence we saw three ways to achieve the ranking: a script-based calculation view with SQL, a graphical calculation view, and a script-based calculation view with CE functions.

 

Please do let me know your feedback on this.

 

Yours,

Krishna Tangudu


Video: Automating SAP HANA System Installation


With the newest SAP HANA lifecycle management installation tool, hdblcm, server and component installation can be simplified and optimized through automation. An automated installation differs from a standard one by using batch mode, either alone or in combination with a configuration file that passes saved parameter key-value pairs to the hdblcm installer.

 

For more information, refer to the following video and the official SPS 07 SAP HANA Server Installation Guide.

 

Welcome to the new SAP Operational Process Intelligence community on SCN


We are happy to announce the new SAP Operational Process Intelligence space, designed to let our developers and business users interact and help us validate requirements, and to share use cases and success stories, as well as webcasts, how-to guides, demos, and best practices.

 

We invite you to follow the new space and share your OPInt experiences with other contributors.

 

Quick overview of SAP Operational Process Intelligence:

SAP Operational Process Intelligence (OPInt) is a HANA-based application providing real-time, actionable intelligence into your business processes spanning SAP Business Suite, SAP Business Workflow, and SAP NetWeaver Process Orchestration, enabling decision makers and process analysts to respond to exceptional situations and manage business transactions. Release 1.1 adds support for third-party systems through the inclusion of Sybase ESP, and the decision-making role of the information worker will be further expanded by introducing task management natively in SAP HANA.

 

SAP Operational Process Intelligence helps line-of-business users to gain process visibility across their end-to-end business processes with clear focus on helping them achieve business outcomes. Improved process visibility immediately results in superior operational responsiveness and better tactical decisions in day-to-day business operations.

 

As of SAP NetWeaver Process Orchestration 7.31 SP09 | 7.4 SP04, SAP customers can now co-deploy SAP Operational Process Intelligence and SAP NetWeaver Process Orchestration on the same HANA system and leverage combined process execution and process intelligence. Read more in this blog.

 

Where to start:

SAP NetWeaver BPM is already available on SAP HANA and you can now leverage the power of SAP NetWeaver Process Orchestration and SAP Operational Process Intelligence to define performance-driven applications with built-in process visibility and integration. With this TechEd 2013 session Thomas Volmering and Harsh Jegadeesan provide an introduction to the solution and explain the road ahead.

 

Join us also on social media:

SAP NetWeaver Process Orchestration on LinkedIn

@SAPTechnology and @SAPHANA on Twitter

SAP Technology and SAP HANA on Facebook

SAP HANA: User-Defined Exception Handling


Hi Folks,

 

I came across a basic requirement today and thought of sharing it with you.

 

Problem description:


If a SELECT on a table returns 0 records, instead of displaying no data in the output we need to display "No Data Found" as the output.


 

/* Creating a Procedure for testing User-Defined Exception Handling */


CREATE PROCEDURE EXCEPTION_HANDLING AS BEGIN

DECLARE CountVar INT; /* Variable to count number of records in the table */
DECLARE CUSTOMCONDITION CONDITION FOR SQL_ERROR_CODE 10001;
/* Custom Error Code = 10001*/


/*User Defined exception handler */


DECLARE EXIT HANDLER FOR CUSTOMCONDITION

SELECT ::SQL_ERROR_CODE AS "Error Code", ::SQL_ERROR_MESSAGE AS "Error Message" FROM DUMMY;


/* To check if the count = 0 to raise “no data found exception” */

SELECT COUNT (*) INTO CountVar FROM EXCEPTION_TEST;


IF CountVar = 0

THEN
/* Signaling Custom error and setting the Custom Error Message */
SIGNAL CUSTOMCONDITION SET MESSAGE_TEXT = 'No Data Found Exception';


END IF;


END;



CALL EXCEPTION_HANDLING;


/* OUTPUT shows the user-defined error code along with the user-defined error message, as below */


[Screenshot: output showing the user-defined error code and message]



**Note: I have used the above example only to illustrate user-defined exceptions.


The best way to check whether the table is empty is to use an EXISTS clause, as in the statement below:


SELECT 1 INTO CountVar FROM DUMMY WHERE EXISTS (SELECT 'X' FROM "EXCEPTION_TEST");


If we use this and the table is empty, the exception is caught by SQLEXCEPTION with error code 1299:


[Screenshot: SQLEXCEPTION output with error code 1299]
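Putting that together, a minimal sketch of the EXISTS-based variant (my own assembly of the statements above, not verbatim from the original post):

CREATE PROCEDURE EXCEPTION_HANDLING_EXISTS AS
BEGIN
  DECLARE CountVar INT;
  /* Generic handler; SELECT INTO with no rows raises SQL error 1299 (no data found) */
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
    SELECT ::SQL_ERROR_CODE AS "Error Code", ::SQL_ERROR_MESSAGE AS "Error Message" FROM DUMMY;
  /* Fails with error 1299 when EXCEPTION_TEST is empty */
  SELECT 1 INTO CountVar FROM DUMMY WHERE EXISTS (SELECT 'X' FROM "EXCEPTION_TEST");
END;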


Do let me know your views, and share any better alternatives you might have used.


Yours,

Krishna Tangudu

 

Building Multi-Dimensional Business Applications on the SAP HANA Platform Versus Traditional Platforms


Sign up for our Webinar on January 30 to discover what makes SAP HANA the pre-eminent in-memory database management system for conducting real-time business

 

 

Date: Thursday January 30, 2014

Time: 11:00 am

Speaker: John Appleby, Global Head of SAP HANA, Bluefin Solutions

 

Join us for the second Webinar in our SAP HANA Difference series featuring John Appleby, global head of SAP HANA from Bluefin Solutions. You’ll hear him compare and contrast the dimensions of the SAP HANA platform with traditional platforms. He’ll also highlight major differences to help you evaluate requirements for building next-generation applications.

 

This session will help you understand how the SAP HANA platform can be used to build apps based on real-time data, which combine:

  • Transactions
  • Analytics
  • Text
  • Sentiment
  • Spatial analysis

 

You’ll also see a firsthand demonstration of real-time business with a retail analytics app.

 

Don’t miss this informative Webinar – the second in our SAP HANA Difference series.

 

Register Today!

Configure ABAP to HANA SSL connection


I am working on a project where one of the requirements is to encrypt the traffic between the CI and the HANA back end DB. This is sort of documented in section 4.3 of the HANA Security Guide (http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf), but it still took me some time to figure out. I understand the next version of the security guide will have more detailed instructions, but thought I'd share some details that may help others in the meantime.

 

The below instructions are based on sapcrypto. In SP7, there is an option to use commoncrypto. OpenSSL is also an option if sapcrypto is not installed.

 

  • Install sapcrypto on both CI and HANA systems
    • This is well documented, so I won't provide details here
    • Copy libsapcrypto.so to .../lib directory
      • cp libsapcrypto.so /usr/sap/<sid>/SYS/global/security/lib
  • Create PSE files for both the CI and HANA systems
    • See 1718944 - SAP HANA DB: Securing External SQL Communication (SAPCrypto)
    • If a Certificate Authority (CA) is not available, SAP provides an option to create a test cert that is valid for 8 weeks: https://websmp110.sap-ag.de/tcs
      • This option can be used to sign the sapcli.req from Note 1718944
    • In my case, the customer created a PFX file using their own CA
      • This requires a conversion of the *.PFX files provided by customer to PSE using command below
        • sapgenpse import_p12 -p sapcli.pse <existing_cert>.pfx
    • copy sapcli.pse to sapsrv.pse
        • cp sapcli.pse sapsrv.pse
    • sapsrv.pse is required for server authentication – HANA DB
    • sapcli.pse is required for client authentication – CI ABAP system
      • Even though only the above files are required on the respective systems for our scenario, it is easy to create both pse files on both systems.
  • Enable SSL on HANA
    • su to <sid>adm
    • Create $SECUDIR
      • mkdir -p $SECUDIR
    • Copy both pse files to $SECUDIR
      • cp sapcli.pse sapsrv.pse $SECUDIR
    • Restart the HANA DB to enable SSL
  • Configure CI to connect via SSL
    • Copy sapcli.pse to /usr/sap/<SID>/DVEBMGS00/sec
      • If sec directory above doesn’t exist, then create it while logged on as <sid>adm
    • Add the following parameter in the DEFAULT.PFL to enable encryption on the DB connection
      • dbs/hdb/connect_property = ENCRYPT=TRUE
    • Stop and start CI.
    • Check dev_w0 and verify connection to DB. Should look something like below.

Loading SQLDBC client runtime ...

C  SQLDBC Module  : /usr/sap/<SID>/hdbclient/libSQLDBCHDB.so

C  SQLDBC Runtime : libSQLDBCHDB 1.00.70.00 Build 0386119-1510

C  SQLDBC client runtime is 1.00.70.00.0386119

C  connect property [ENCRYPT = TRUE]

C

C  Try to connect via secure store (DEFAULT) on connection 0 ...

C

C Sun Jan 12 19:41:31 2014

C  Attach to HDB : 1.00.70.00.386119 (NewDB100_REL)

C  Database release is HDB 1.00.70.00.386119

C  INFO : Database '<SID>/00' instance is running on '<HANA_Host>'

C  INFO : Connect to DB as 'SAP<SID>', connection_id=300100

C  DB max. input host variables  : 32767

 

 

I ran into a few errors on the CI that caused the work processes to crash. I've outlined the errors I saw in the dev_w* traces, their causes, and the steps to resolve them.

  • Troubleshooting -
    • Error message
      • "Cannot create SSL context" - This message does not provide additional details as the below error messages do. Very generic.
        • Possible Causes
          • sapcrypto library is not accessible
          • PSE key/trust store is not available or not properly filled
        • Solution
          • Ensure sapcrypto is installed correctly and the PSEs are created properly
    • Error message

C SQLERRTEXT : Connection failed (RTE:[300010] Cannot create SSL context: ERROR in SSL_CTX_set_default_pse_by_name:\

C                (4129/0x1021) The PSE does not exist : "/usr/sap/<SID>/DVEBMGS00/sec/sapcli.pse",ERROR in ssl_set_pse\

C               : (4129/0x1021) The PSE does not exist : "/usr/sap/<SID>/DVEBMGS00/sec/sapcli.pse",ERROR in af_open: (\

C               4129/0x1021) The PSE does not exist : "/usr/sap/<SID>/DVEBMGS00/sec/sapcli.pse",ERROR in secsw_open: (\

C               4129/0x1021) The PSE does not exist : "/usr/sap/<SID>/DVEBMGS00/sec/sapcli.pse",ERROR in secsw_open_ps\

        • Solution
          • Verify sapcli.pse is available in the directory and <sid>adm has permissions to it.
    • Error message

SQLERRTEXT : Connection failed (RTE:[300015] SSL certificate validation failed: host name '<hostname>' does not m\

C               atch name in certificate '<DifferentHostname.domain.com')

B  ***LOG BV3=> severe db error -10709    ; work process is stopped [dbsh         1244]

B  ***LOG BY2=> sql error -10709 performing CON [dblink       550]

B  ***LOG BY0=> Connection failed (RTE:[300015] SSL certificate validation failed: host name '<hostname> does not match name in certificate '<DifferentHostname.domain.com') [dblink       550]

M  ***LOG R19=> ThDbConnect, db_connect ( DB-Connect 000256) [thDatabase.c 75]

M  in_ThErrHandle: 1

M  *** ERROR => ThInit: db_connect (step TH_INIT, thRc ERROR-DB-CONNECT_ERROR, action STOP_WP, level 1) [thxxhead.c   2151]

 

        • Cause/Solution
          • Ensure that the CI is using the hostname that exists in the certificate to establish the connection
          • Force the connection to use the hostname specified in the cert by updating the dbs/hdb/connect_property in DEFAULT.PFL
            • Example: dbs/hdb/connect_property = ENCRYPT=TRUE, sslHostNameInCertificate=DifferentHostname.domain.com

The configuration is really simple once you figure it out, but I did run into various issues trying to get it to work. Feel free to ask questions in the comments and I'll do my best to answer right away.

Nohup & hdblcm: how to update SAP HANA over an unreliable network


After Stephanie Lewellen introduced hdblcm and hdblcmgui in her blog last month, I decided to test one of my favorite Linux tools, "nohup", with hdblcm. Nohup is a lesser-known Linux utility that can make the daily life of a system administrator a lot easier.

 

I do like and use GUIs, but I prefer the command line – and hdblcm has proven to be very useful! When I update any SAP HANA instance I am rarely closer than 1000 miles from the server, whether it is a customer, partner, cloud, or internal system. Sometimes the network is OK, sometimes it is horribly slow and I get disconnected a lot. This depends of course on lots of factors, but none that I have any influence on. So why do I like hdblcm? First of all, I like that it is simple to use, very much like hdbupd was in early versions of SAP HANA. And second, it makes it easier for me to use a Linux tool like "nohup" and execute the SAP HANA update without being affected by network disconnects.

 

There are many ways to update SAP HANA, but the procedure below is an option to consider for unreliable networks (e.g. geographic location, hotel room, airplane, some public clouds, multiple VPNs) and/or longer-running SAP HANA updates (e.g. large scale-out with a large row store, disk elasticity in a public cloud). My goal is that after I start the actual update, a network disconnection will not interrupt the SAP HANA update process. To achieve this I have to run hdblcm in a way that it needs no inputs during its execution (batch), which is documented in the latest SAP HANA Server Installation Guide on http://help.sap.com/hana.

 

Note: I use this method only if I do not trust the network to stay connected. If I can trust the network then I simply use hdblcm interactively as documented in the SAP HANA Server Installation Guide.

 

First let’s check the SAP HANA version for my instance P15:


[Screenshot: HDB version output before the update]

 

As documented in the official guide, extract the downloaded media into the same base directory, in my case /hana/Rev70:

  • SAPCAR -xvf IMC_STUDIO100_70_0-10009662.SAR
  • SAPCAR -xvf IMDB_CLIENT100_70_0-10009663.SAR
  • SAPCAR -xvf IMDB_SERVER100_70_0-10009569.SAR


[Screenshot: SAPCAR extraction]

Now I run hdblcm from the SAP_HANA_DATABASE directory to look at the options. Note that it shows the component directories it is reading for the update:

[Screenshot: hdblcm help output listing detected components]

To run hdblcm as a batch process I create a template file:

[Screenshot: creating the hdblcm configuration template file]

For this simple example I change some values in the template file and save it:

# Select the action to be either installation or update (Default: install)

action=update

# Index

components=client,studio,server

# SAP HANA System ID

sid=P15

# System Administrator Password

password=MySecret123

# Database User (SYSTEM) Password

system_user_password=MySystem123

 

Note:  In the template file you can choose whether to update the saphostagent as part of the SAP HANA update (default = yes). The saphostagent gets updated with the version packaged in the extracted SAP_HANA_DATABASE directory.

Now, as root, I use nohup to execute the update and use & to send the process to the background:

[Screenshot: running hdblcm with nohup]

For readability:

nohup ./hdblcm --action=update --batch --configfile=template.txt > nohup.log &

 

Once the process is started I can disconnect my PuTTY session, but it is better to do some quick checks first. To monitor the process I simply use "tail -f nohup.log". I always do this right away after executing nohup, to ensure there are no immediate errors that I need to fix before I choose to disconnect or forcefully get disconnected.


[Screenshot: starting the update and tailing nohup.log]

The result in nohup.log:

[Screenshot: nohup.log contents]

A quick check to verify the updated version:

[Screenshot: HDB version output after the update]
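As a side note, the version can also be read via SQL; a minimal sketch (my own example, assuming the standard SYS.M_DATABASE monitoring view):

-- Returns the database name and the installed HANA version
SELECT DATABASE_NAME, VERSION FROM SYS.M_DATABASE;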

Recommendation: For security either delete the template file, or at least remove the passwords from it.

 

Note that nohup can also be used for other processes. For example I have used it for scp copy and mass start/stops in cloud landscapes for training instances.

 

Example mass start executed as root:

root@host: /hana/Rev70/SAP_HANA_DATABASE # cat start.sh

#!/bin/ksh

nohup su - p10adm -c "HDB start" &

nohup su - p11adm -c "HDB start" &

nohup su - p12adm -c "HDB start" &

nohup su - p13adm -c "HDB start" &

nohup su - p14adm -c "HDB start" &

nohup su - p15adm -c "HDB start" &


I hope this was helpful; let me know if you have any comments or questions!

10 tips for SAP River developers


SAP River was released with SAP HANA SPS 7 in December, so I've had my hands on it for over a month now and written a good pile of code. Thanks to the River team for supporting my... many... questions. I thought I'd give back with some developer tips for River.

 

1) Use sap-river.com (sometimes)

 

Even if you, like me, have an on-premise SAP HANA appliance with SAP River, you may find it useful to use sap-river.com. The reason is that sap-river.com has a newer version of the interface and can often pick out errors in your code that HANA Studio fails to catch. Also, you can share your code with other people in the cloud, which can be handy.

 

If you don't have a SAP HANA appliance loaded with SAP River, then sap-river.com is a great way to get started and build some apps!

 

2) Take time to design your schema

 

You will get every minute you spend designing your schema back, ten times over, when you come to develop apps. There are a number of key things, but remember that River doesn't support changing elements within entities once they have data - only adding new entities:

 

- Think carefully about entities and the relationships between them

- Get the key, and required elements in the entities locked down

- Use the correct types, and reduce the length of them where possible (especially strings, which default to length 5000)

 

3) Never duplicate data

 

In the latest app I am building, there are a lot of time-based elements. For instance, there is a person who has a date of birth and an age (in decimal, integer, remainder, and age-band form). There are another 5 or so entities which also relate to time, and age (in 4 forms) is stored against each of those entities so you can get the age at the time of transaction.

 

With River, we need to store 1) date of birth and 2) date of transaction - all the other 20-30 copies are entirely redundant. Instead we calculate them on demand by using calculated elements:

 

element Age: Integer = sap.hana.util.dateTime.daysBetween(BirthDate,sap.hana.util.dateTime.currentUTCTimestamp())/365.25;

 

We can do age bands just as easily

 

element ageRange : Integer = select one rangeId from AgeRange where (age >= ageFrom && age < ageTo);


entity AgeRange {

     key rangeId: Integer;

     ageFrom: Integer;

     ageTo: Integer;

}


There is massive simplicity caused by never duplicating data - you never need to worry about whether an element is up to date. And even in my testing with large volumes of records, I get <200ms round trip times from the OData services from Chrome.


4) Use enumerated types to create the equivalent of foreign key restrictions


Enumerated types are your friend! If you use them, they will restrict the values that can be entered, which is very handy indeed.


element country: CountryCode;


type CountryCode: String(2) enum { US; CA; }


5) You can get the OData documentation in a browser


I don't find that the OData browser works that well in HANA Studio - it works much better in a browser. Happily, you can go directly to it and bookmark your project, using the following syntax:


http://riverserver:80XX/sap/hana/rdl/odata/docs/{riverproject}/{riverproject}


6) Keep an eye on the HANA Index Server Trace file


Sometimes your RDL will misbehave and the syntax checker in HANA Studio won't pick it up. You get this dreaded message:


project:project.rdl

Internal error occurred, please check the logs / contact your Administrator.

When this happens, you can try using the sap-river.com version of the editor, or alternatively you can look at the HANA Index Server Trace. This will often give you a clue as to where the problem is.

I also recommend introducing new concepts slowly - for instance I built 10 views in a row yesterday, which all had the same problem, and troubleshooting the RDL was difficult until I removed 9 of them, corrected one, and duplicated my code 9 times.

7) Python is your friend!


I use Python to test all my RDL code - it is much better than Postman when it comes to the need to test and retest OData, plus you get the added advantage of being able to reload data quickly. I use Python either on my Mac, or when I need better performance, on the HANA server itself.


The Python Requests HTTP library is an excellent way to connect to River and test your code. It's also a great way to do data loads, because you get all the River syntax checking, which you don't get if you load CSVs.


The only downside is that the current implementation of River only allows one record per OData call, which means that a large number of records can take some time to load! For higher-performance loads, I've been testing threads in Python to queue up data and insert with up to 10 concurrent threads.


8) Use your River project for other stuff too


By default, your River project will have just one file - the RDL that contains all your code. However, I usually reuse the HANA repository workspace for all of my development artifacts - it is a handy dumping ground, to keep all your code in one place!


- Associated SQL for testing purposes

- Python files and associated CSV for loading data

- SAPUI5 artifacts

- Information views (Calculation Views)


9) Use GitHub to share your River repository workspace


This is a tip from Bluefin colleagues Brenton O'Callaghan and DJ Adams. Your River/HANA repository workspace will exist in whatever folder you put it in, and you can handily configure your River repository workspace to also be the location for a GitHub repository. Now you can share your River project with friends and co-developers in the cloud. At Bluefin we also use an on-premise Gitlab instance that Oliver Rogers built, which allows us to store more sensitive code in a safe place.


Final Words


I'm building a substantial and complex app in River in my spare time, and I'm very impressed by the increase in simplicity and reduction of storage cost that River provides. You store information only once, and then describe in clear English how it is represented.


Exposing complex data views is straightforward and you don't need to worry about joins - they are all implicit in your entity association definition. It takes a while to get used to the entity modeling approach of River and I recommend that you take some time to read about Entity Relationship Modeling before you get started.


River was intended to be a reinvention of the programming paradigm and this first version is a move in that direction. With the anticipated full second release of River in Q2 of 2014, I think we will see significant progress, but there is more than enough in the first release to build complex business applications. I suggest you get started now.


Reusing PI objects within HCI


Adopting SAP HANA Cloud Integration can mean migrating existing interfaces from an SAP PI/PO installation to the cloud. There is, though, no upgrade path or easy, automated way to migrate your interfaces. The only possibility is to import existing Operation Mappings, Message Mappings, and Service Definitions into HCI and set up the configuration for the interface again.

How you can import objects from PI into HCI is described in the configuration guide section 2.3.5.

In this blog I want to share some thoughts with you on this functionality and hopefully you can give your opinion too.

The first step is to create a connection between the SAP PI/PO repository server and HCI. This connection can be defined within the preferences of SAP HANA Cloud Integration. You can define and test your connection from there.

[Screenshot: connection settings in the HCI preferences]

 

Repository objects can be imported into an Integration Project defined in your Eclipse perspective “Integration Designer”. One of the specific actions for an integration project is “Import PI Content”. The next step is to select the object category you want to import from; you will then see a list of all available objects in the repository.

[Screenshots: the Import PI Content action and the object selection list]

The possibility to select an object does not ensure that the object can really be imported. There are constraints on the objects that can be imported (section 2.3.5 of the Developers Guide). These constraints are checked after you have started the import.

If an object is not suitable for import, a message is displayed explaining why it could not be imported.

[Screenshot: message explaining why an object could not be imported]

 

Despite the constraints, a lot of the mapping and interface objects will be suitable for reuse within HCI. There are, though, a few thoughts I would like to share with you.

     1) The import of objects must be done for each Integration Project separately. Therefore, if your Integration Projects are finely granular, you will have to do a lot of imports.

     2) The constraints of not using function libraries, imported archives, and multi-message mappings lead to the conclusion that a lot of mappings cannot be reused.

     3) The import of an operation mapping leads to an XML file named *.opmap. This file contains references to the service definitions used and the mapping (XSL or message) between these service definitions. An operation mapping can only be imported from PI/PO and is not an artifact that can be created within HCI - besides typing the XML file yourself, of course. :)

     4) The import of an Operation Mapping results in the import of the underlying objects. The message mapping(s) and the service definitions are imported, and the operation mapping itself is transformed into an XML file.

     5) If an object you would like to import is already in your Integration Project, it will not be overwritten. So be careful with imports and check whether the correct version of your object is in HCI.

 

I hope this makes the reuse of PI objects within HCI a bit clearer. Please share your thoughts on this subject by commenting on the blog.

 

Best regards,

Fons

SAP HANA Customer Spotlight: University of Kentucky... One Year Later with SAP HANA


Join us for an update on the University of Kentucky's agile, incremental approach to a successful SAP HANA implementation

 

Date: Tuesday January 28, 2014

Time: 11:00 am EST

Speaker: Dr. Vince Kellen, CIO of the University of Kentucky

 

Please join us on January 28, 2014, for the next session of our Webcast series, “SAP HANA Customer Spotlight.”

 

Our guest is Dr. Vince Kellen, chief information officer of the University of Kentucky. Analytic projects often do not proceed as planned; find out how the University of Kentucky adopted an agile, incremental approach to manage its successful SAP HANA implementation.

 

Dr. Kellen previously joined us for an SAP HANA Spotlight Webinar, and in this session he will provide a one-year update, sharing the rapid growth in the development of the University of Kentucky’s SAP HANA platform. You’ll hear him discuss the following topics:

 

  • Developing a vibrant analytic community of practice to help foster adoption
  • Tightly integrating the University of Kentucky’s student mobile application with SAP HANA to deliver personalized messages and help improve student retention
  • Using SAP HANA to target student micro-surveys, and how the very high response rates in micro-surveys are being used to improve understanding of student issues
  • Designing SAP HANA components for maximum reuse and rapid analytic object deployments

 

We hope you take this opportunity to explore our customers’ firsthand experiences with SAP HANA. Visit the “SAP HANA Customer Spotlight” series’ home page for a complete listing of upcoming and on-demand events.

 

Register Today!

SAP HANA Customer Spotlight: Unilever Chose SAP HANA. Find Out Why.


Learn how Unilever is empowering its SAP implementation with SAP HANA during our upcoming Webcast

 

Date: Tuesday February 11, 2014

Time: 11:00 am EST

Speaker: Tamara Johnson, Senior IT Manager, Global ERP Center of Excellence - Finance and Innovation

 

Please join us on February 11, 2014, for the next session of our Webcast series, “SAP HANA Customer Spotlight.”

 

Our guest is Tamara Johnson, senior IT manager, global ERP center of excellence – finance and innovation at Unilever, a leading multinational consumer goods company whose products include foods, beverages, cleaning agents, and personal care products. She will discuss how Unilever’s IT implementation of SAP HANA has enabled the financial month-end close.

 

Join us to hear:

  • How the business case was built and what the implementation looks like
  • How it has enabled the business to close in one day
  • How Unilever joined the 10K club
  • Technical challenges and lessons learned along the way

 

We hope you take this opportunity to explore our customers’ firsthand experiences with SAP HANA. Visit the “SAP HANA Customer Spotlight” series’ home page for a complete listing of upcoming and on-demand events.

 

Register Today!

Using loops within stored procedures for ETL processes within HANA


Purpose: Demonstrate how to use a looping technique to execute stored procedures in smaller chunks. This assumes no external ETL tools are available and stored procedures must be used.

 

Motivation: Avoid hitting Out of Memory (OOM) errors during processing, or make ETL-type operations faster in some cases. I have seen a few SCN posts where people are hitting these OOM errors, and of course I have hit them myself, so I have figured out how to mitigate this risk. Additionally, I have observed that processing many small data sets vs. one very large one can achieve faster execution times.

 

Real World Scenario: A client needed to build a denormalized dimension that ended up being quite large - say 30 million records. To build this dimension, 5 separate, fairly large tables needed to be joined to a central "driver" table. These were mostly many-to-many joins. Attempting to perform the logic in one shot in a stored procedure to build the dimension consistently failed due to Out of Memory (OOM) errors.


Real World Solution: Process the required data in smaller chunks by calling the same procedure multiple times, looping over some logical partitioning criteria - essentially, performing the same logic for every <region, plant, or company code> in your data or driver table. In the example shown below, I have chosen the field LOOPING_DRIVER_TABLE.DATE, but of course this can be whatever makes sense for your requirement.

 

Solution details: I can't take full credit for this, as Jody Hesch helped me with some basics. Thanks to him for being generous with his knowledge - this is why the SCN community is awesome!

 

To illustrate a generic approach, the following components are required. Again, this is just an example; your mileage may vary according to your requirements. For simplicity, this assumes a full drop and reload of the target table - you might find that a delta loading approach makes more sense.
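For a delta-style variant, the date list in the wrapper procedure shown later could be restricted to dates not yet present in the target. A minimal sketch, my own variation on the code below:

-- Only process dates that are not yet in the target table
date_list = SELECT DISTINCT "DATE"
            FROM "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE"
            WHERE "DATE" NOT IN (SELECT DISTINCT "DATE" FROM "<YOURSCHEMA>"."LOOPING_TARGET_TABLE");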

 

1. Table with a list of values, or the actual transaction/driver table

2. Table that contains the data to be joined (lookup)

3. Target table with transaction/driver data plus the lookup attribute

4. Procedure for performing the join/lookup and insert, with an attribute for looping

5. Wrapper procedure for looping

6. Read/write procedures must be enabled (set configuration/indexserver/repository/sqlscript_mode = 'unsecure'; see the sketch after this list)

7. User _SYS_REPO must have CREATE/DELETE/INSERT/SELECT object privileges on <YOURSCHEMA>, grantable

8. Replace the <YOURSCHEMA> and <YOURPACKAGE> placeholders with your own schema and package names
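A minimal sketch for enabling read/write procedures (item 6), assuming you have the privilege to change system configuration; the exact statement is mine, not from the original post:

-- Enable read/write (unsecure) SQLScript procedures system-wide
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('repository', 'sqlscript_mode') = 'UNSECURE' WITH RECONFIGURE;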

 

1. Table with a list of values, or the actual transaction/driver table


DROP TABLE "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE";
create column table "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE" (
  "DATE" NVARCHAR (8) not null,
  "CUSTOMER" NVARCHAR (10) DEFAULT '' not null,
  "MATERIAL" NVARCHAR (18) DEFAULT '' not null,
  "SALES_QTY" DECIMAL (15,2) DEFAULT 0 not null,
  "SALES_VALUE" DECIMAL (15,2) DEFAULT 0 not null);
INSERT INTO "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE" VALUES
('20140101', '0000112345', '12345678', 10, 100);
INSERT INTO "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE" VALUES
('20140102', '0000112346', '12345678', 20, 190);
INSERT INTO "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE" VALUES
('20140103', '0000112347', '12345678', 11, 180);
INSERT INTO "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE" VALUES
('20140104', '0000112348', '12345678', 15, 175);
INSERT INTO "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE" VALUES
('20140105', '0000112349', '12345678', 1, 100);
INSERT INTO "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE" VALUES
('20140106', '0000112351', '12345678', 4, 89);


2. Table that contains the data to be joined (lookup)

 

DROP TABLE "<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE";
create column table "<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE" (
  "CUSTOMER" NVARCHAR (10) DEFAULT '' not null,
  "CUSTOMER_SPECIALTY" NVARCHAR (2) DEFAULT '' not null);
INSERT INTO "<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE" VALUES
('0000112345', 'LB');
INSERT INTO "<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE" VALUES
('0000112346', 'AB');
INSERT INTO "<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE" VALUES
('0000112347', 'HS');
INSERT INTO "<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE" VALUES
('0000112348', 'DM');
INSERT INTO "<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE" VALUES
('0000112349', 'AX');
INSERT INTO "<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE" VALUES
('0000112351', 'ZT');


3. Target table with transaction/driver data plus the lookup attribute

 

DROP TABLE "<YOURSCHEMA>"."LOOPING_TARGET_TABLE";
create column table "<YOURSCHEMA>"."LOOPING_TARGET_TABLE" (
  "DATE" NVARCHAR (8) not null,
  "CUSTOMER" NVARCHAR (10) DEFAULT '' not null,
  "MATERIAL" NVARCHAR (18) DEFAULT '' not null,
  "SALES_QTY" DECIMAL (15,2) DEFAULT 0 not null,
  "SALES_VALUE" DECIMAL (15,2) DEFAULT 0 not null,
  "CUSTOMER_SPECIALTY" NVARCHAR (2) DEFAULT '' not null);


4. Procedure for performing the join/lookup and insert


--<YOUR_PACKAGE>.SP_WRITE_TARGET_TABLE (takes one input parameter, LOOP_DATE)
/********* Begin Procedure Script ************/
BEGIN
INSERT INTO "<YOURSCHEMA>"."LOOPING_TARGET_TABLE"
SELECT A."DATE", A."CUSTOMER", A."MATERIAL", A."SALES_QTY", A."SALES_VALUE", B."CUSTOMER_SPECIALTY"
FROM "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE" A
LEFT OUTER JOIN
"<YOURSCHEMA>"."LOOPING_LOOKUP_TABLE" B
ON (A."CUSTOMER" = B."CUSTOMER")
WHERE A."DATE" = :LOOP_DATE;
END;
/********* End Procedure Script ************/


--Test Procedure in isolation

CALL "_SYS_BIC"."<YOUR_PACAKGE>/SP_WRITE_TARGET_TABLE" ('20140101');

 

SELECT * FROM "<YOURSCHEMA>"."LOOPING_TARGET_TABLE";

DATE;CUSTOMER;MATERIAL;SALES_QTY;SALES_VALUE;CUSTOMER_SPECIALTY

20140101;0000112345;12345678;10;100;LB


5. Wrapper procedure for looping

 

--<YOUR_PACKAGE>.SP_LOOP_TEST
/********* Begin Procedure Script ************/
-- scalar variables
i INTEGER;
lv_row_count INTEGER;
lv_current_date NVARCHAR(10);
BEGIN
DELETE FROM "<YOURSCHEMA>"."LOOPING_TARGET_TABLE";
-- Find the logical partition looping values
date_list = SELECT DISTINCT "DATE" FROM "<YOURSCHEMA>"."LOOPING_DRIVER_TABLE";
-- Store the dataset size in the row count variable
SELECT COUNT(*)
INTO lv_row_count
FROM :date_list;
FOR i IN 0 .. :lv_row_count-1 DO
    -- Pick the i-th date; note that OFFSET indexes at 0, not 1
    SELECT "DATE"
    INTO lv_current_date
    FROM :date_list
    LIMIT 1 OFFSET :i;
    -- Process one date at a time
    CALL "_SYS_BIC"."<YOUR_PACKAGE>/SP_WRITE_TARGET_TABLE" (lv_current_date);
END FOR;
-- Manually initiate the delta merge process
MERGE DELTA OF "<YOURSCHEMA>"."LOOPING_TARGET_TABLE";
END;
/********* End Procedure Script ************/


--Test full procedure and check results

CALL "_SYS_BIC"."<YOUR_PACAKGE>/SP_LOOP_TEST";

 

SELECT * FROM "<YOURSCHEMA>"."LOOPING_TARGET_TABLE";

 

DATE;CUSTOMER;MATERIAL;SALES_QTY;SALES_VALUE;CUSTOMER_SPECIALTY

20140101;0000112345;12345678;10;100;LB

20140102;0000112346;12345678;20;190;AB

20140103;0000112347;12345678;11;180;HS

20140104;0000112348;12345678;15;175;DM

20140105;0000112349;12345678;1;100;AX

20140106;0000112351;12345678;4;89;ZT

 

Happy HANA!

Justin

What's new in SAP HANA SPS 7 - SAP River


The much-anticipated SAP River became available with SAP HANA SPS 7 and is attracting a huge amount of interest. To help everyone get up to speed, the SAP HANA Academy has produced a series of video tutorials that show you SAP River hands-on.

 

There are now 19 video tutorials published, totaling almost 3 hours of hands-on content (the HANA Academy is a PowerPoint-free zone)! They cover all aspects, from installation and setup to RDL specifics to interacting via OData, debugging, and completing the user interface with SAPUI5.

 

Many videos include associated code snippets, so it's recommended to view them via the links below rather than directly on the YouTube channel.

 

Getting Started

 

Enabling the Environment

Creating the Environment

 

Hello World

Hello World using OData

 

Setting up an Experience SAP River sandbox

 

Entities, Associations, Actions, and Views

Access Control

 

Creating Applications

 

Generating Test Data

Using Data Preview

Using ODATA calls

 

Modifying applications

 

Debugging with Data Preview

Debugging without Data Preview

Viewing the Trace Logs

 

Deleting the application

 

Creating the web-based UI

Creating the UI using SAPUI5

 

You can always find the latest, greatest index of SAP River tutorials on the SAP HANA Academy site here: www.saphana.com/community/hana-academy/#SAPRiver

 

Want to know more about the SAP HANA Academy? Visit us at academy.saphana.com or follow us on Twitter @saphanaacademy.

SuccessFactors Adapter in SAP HANA Cloud Integration (SAP HCI)


Overview

 

With more customers moving towards a cloud-based IT investment strategy for their HCM solution, the need to integrate with their existing on-premise setup and other third-party systems is on the rise. Large companies generally move towards a cloud HCM investment like SuccessFactors in a phased manner. The phasing generally happens along two dimensions. The first is the solution dimension, where only certain processes, for example Performance Management, Compensation Management, or Recruiting, are moved to the cloud first, and the other core processes of the company follow later. The second dimension is location, where HCM business processes in a select set of locations are moved first before the rest of the larger regions follow.

 

What this results in is the requirement to keep all of the systems in sync and ensure the processes interact smoothly. A few examples of setups that lead to an integration requirement with SuccessFactors are as follows:

 

  • Company manages its Core Employee Management in SuccessFactors (Employee Central) whereas payroll is managed using the On-Premise SAP ERP HCM Payroll.
  • Company manages its Employee Management in On-Premise using the On-Premise SAP ERP HCM system whereas Performance management is done using the SuccessFactors BizX suite.
  • Company does Recruitment using the SuccessFactors BizX suite whereas core employee management is done using On-Premise SAP ERP HCM.
  • Company manages its Core Employee Management in SuccessFactors (Employee Central) whereas Travel management and Financials is On-Premise SAP ERP FIN.
  • Company manages its Core Employee Management in SuccessFactors (Employee Central) whereas Benefits management is provided by 3rd party vendors.
  • Company does Recruitment using the SuccessFactors BizX suite whereas assessment of candidates is done by 3rd party vendors.

 

SAP HANA Cloud Integration (SAP HCI) is SAP’s strategic integration platform for SAP cloud customers. It provides out-of-the-box connectivity across cloud and on-premise solutions. Starting January 2014, you have the possibility to connect to the SuccessFactors cloud system using the SuccessFactors application adapter. For integration developers who have used HCI to connect to SuccessFactors systems before, this means you no longer have to worry about the login/logout or API semantics of SuccessFactors, as all of this is now taken care of by the adapter internally.

 

Connecting to a SuccessFactors system is now as easy as providing the system credentials, choosing the entity and deciding on the process flow and mapping.

 

Pre-Requisites

 

  1. SAP SCN User id/password using http://scn.sap.com
  2. SAP HCI Account with User and Roles (Raise a CSS ticket in component XX-INT-CLD-HCI-PI)
  3. Installation of SAP HANA Cloud Integration Eclipse tooling
  4. Ensure SaaS Admin has deployed the required SuccessFactors public certificate keys in the system.jks file of the HCI Tenant that you are using to connect to the SuccessFactors system

 

If you would like to get early access to SAP HCI via the trial landscape, visit the blog SAP HANA Cloud Integration - Test and learn more about SAP’s cloud-based integration solution. You can use the SuccessFactors Adapter even via the trial landscape.

 

Capabilities of the SuccessFactors Adapter

 

  1. Login, Logout, Session Handling  - The Integration developer need not create separate flow steps and manage the Login/Logout calls. Adapter internally handles Login, SessionID management and Logout when connecting to the SuccessFactors system.
  2. SFQL Modeling support – With the Operations Modeler a step by step guided UI is available using which the Successfactors Query Language (SFQL) can be modeled. The Operations Modeler ensures that the SFQL generated is as per the semantics required by SuccessFactors system. The Modeler ensures the user is presented with the required fields and operations based on the Entity being selected. Example in case of Insert operation the required fields are automatically populated; In case of Query only queryable fields are shown; the Modeler shows up only the supported operations for an Entity, example in case an Entity does not support Upsert this is not shown up in the Modeler. The Operation Modeler ensures that the Integration developer only specifies the correct SFQL.
  3. Query, Insert, Upsert, Update – The Adapter supports all the standard operations of SFAPI.
  4. Simple, Compound API support – With the Operations Modeler one can configure both the Simple as well as Compound SuccessFactors API. The Adapter allows calling of both these types of API.
  5. Simple API’s are SuccessFactors Entities with a flat field structure. Compound API’s are Entities with a nested structure. The Compound API Entity in SuccessFactors is the CompoundEmployee Entity. More information on SuccessFactors API is available in the blog Employee Central Entities and Interfacing
  6. Auto XSD generation for mapping – When SFQL is modeled using the Operations Modeler the XSD (data format in which the messages are exchanged with the SuccessFactors system) is automatically generated which can be used in Message Mapping.
  7. Scheduler to poll at specified intervals – In case the integration scenario requires polling the SuccessFactors system at regular intervals, the adapter can be used in a Sender channel, in which case a scheduler is available. The scheduler allows many easy settings in the UI [Example: schedule on the 3rd of every month, schedule every day at hh:mm hours, schedule every day between xx-yy hours, schedule on mm-dd-yyyy between xx-yy hours]
  8. Delta sync to fetch only changed records – There is no need to explicitly create logic to fetch changed records after a previous run. This is specifically required when SuccessFactors is used in polling scenarios and it is required to fetch only changed records in subsequent runs. To enable this Delta Sync option has to be included as a parameter on the query. The Operations Modeler enables inclusion of this parameter in an easy manner.
  9. Dynamically pass values to SFQL Query – It is possible to dynamically pass values to the SFQL based on the input payload. Example if you have an existing message payload in your message bus and you would like to use this in your SFQL query filter condition you could do so by specifying the XPATH from where the content can be read. The Operations Modeler enables inclusion of this parameter in an easy manner.
  10. Multiple SFSF calls and merged payload – In case you have a requirement to make separate calls to different entities in the SuccessFactors system and merge the payload, you can use the SuccessFactors Adapter with the Content Enricher step. By using the multi-mapping (multiple source XSDs to one destination XSD) capability you can merge the payload into a single structure if required.

 

Support for OData entities, released in SuccessFactors release 1311, and ad-hoc query support are under development in HCI at the time of writing this document (Jan 2014). You can expect this functionality soon.

 

Configuring the SuccessFactors Adapter in iFlow

 

When to use the SuccessFactors Adapter as a Sender Channel and Receiver Channel

 

Use the SuccessFactors Adapter as a Sender channel when you would like to poll the SuccessFactors system at regular intervals and query data from it. When using the SuccessFactors Adapter as a Sender channel, only the Query operation is available, and you also have the option to configure the schedules at which data is fetched from the SuccessFactors system.

 

Use the SuccessFactors Adapter as a Receiver channel when you would like to query or update data in the SuccessFactors system not at specified intervals but rather on completion of other integration flow steps in the iFlow.

 

Deploying the Credentials using which you would connect to the SuccessFactors system

 

  • Go to the Node Explorer View and Right Click on the Tenant Management Node to choose ‘Deploy Artifacts’

[Screenshot: Deploy Artifacts]

  • Choose Basic Authentication

 

[Screenshot: Basic Authentication option]

 

  • Specify the credential details and SuccessFactors system details, and click Finish

 

[Screenshots: credential details and the finish dialog]

 

Creating your first SuccessFactors Integration Project

 

In this example, you will connect to a SuccessFactors system and fetch the CompoundEmployee details.

 

STEP 1: Create your project

 

  • Go to your Eclipse environment and choose File -> New -> Integration project

[Screenshot: New Integration Project wizard]

 

  • Specify the project name and iFlow name. You can choose a predefined pattern; in this case, since it is a simple project, we shall choose the Point-to-Point Channel

 

STEP 2: Select the SuccessFactors Adapter in Sender Channel

 

  • Double Click on the Sender Channel Configuration and Choose the SuccessFactors Adapter

 

[Screenshots: choosing the SuccessFactors Adapter for the Sender channel]

 

STEP 3: Configure the SuccessFactors Adapter – Connection Tab

 

[Screenshot: Connection tab settings]

 

  1. Specify the Credential Name that you deployed in the earlier step using which you would like to connect to the SuccessFactors system
  2. Specify the Proxy Host and Proxy Port through which the adapter can communicate with the SuccessFactors system. Currently, when connecting to a SuccessFactors system via the HCI demo landscape, you could specify these as proxy and 8080

 

STEP 4: Configure the SuccessFactors Adapter – Processing Tab

 

[Screenshot: Processing tab settings]

  1. Call Type is currently defaulted to Synchronous (as of Jan 2014). This will be enhanced with an Ad-Hoc option when that capability is made available in the SuccessFactors Adapter.
  2. Operation Name consists of Entity, Operation, and Query/Fields. You can configure this using the Operations Modeler. It should conform to the format in which the SuccessFactors system can execute the required entity.
    • Entity: The SuccessFactors Entity that you would be using to transfer data. In this Example this is CompoundEmployee
    • Operation: Query, Insert, Update or Upsert. In this example this would be Query
    • Query/Fields: The SuccessFactors Query Language (SFQL) you would pass to query/insert/update/upsert the required fields along with required conditions. In this Example this can be SELECT person, email_information FROM CompoundEmployee WHERE job_code = 'ADMIN-1'

 

Operations Modeler

 

The Operations Modeler gives an intuitive UI for user to model the Operations based on the SuccessFactors Query Language (SFQL)

 

  • To open the Operations Modeler Wizard click on the Operations Modeler button in the Processing Tab. Specify the Address, Company ID, Username and Password using which you would like to connect to the system. This can also be a SuccessFactors Test System that you are using to Test your iFlow

[Screenshot: Operations Modeler connection details]

 

  • Clicking Next will connect to the specified SuccessFactors system and list the entities that you can use. For this example, choose the CompoundEmployee entity

 

[Screenshot: entity selection]

 

  • Select the Operation and Fields of the Entity

 

[Screenshot: operation and field selection]

 

When using the SuccessFactors Adapter as a Sender channel, as in this example, only the Query option is available. This is because you would use the SuccessFactors Adapter in the Sender channel when you would like to query data at a specified time or at regular intervals. In all other cases you would want to use the SuccessFactors Adapter as a Receiver channel.

 

All supported operations for a selected entity are available when using the SuccessFactors Adapter as a Receiver channel. Only those operations and fields that the SuccessFactors system exposes, as returned by the SuccessFactors Describe API, are shown in the Operations Modeler.

 

Also note that when using the adapter for operations like Insert, Update, or Upsert, certain fields that are mandatory for the API to work are prefilled.

 

  • Specify the filter condition using which you would like to restrict the Query Result. This option is available only in case of Query Operation.

[Screenshot: filter condition]

Note:

You have an option to specify the value as an XPath. This is useful when using the SuccessFactors Adapter as a Receiver channel and you would like the value in the query to be extracted from the payload available on the message bus.

 

You can also specify a DateTime field like last_modified_on as a Delta Sync operation, in which case, if the iFlow is scheduled to run at regular intervals, only the changed records are fetched in subsequent runs.

 

  • Configuring Sorting Criteria. This option is available only in case of Query Operation.

 

In case the selected entity allows sorting on certain fields, as specified in the Describe API, you get a field list by which you can sort. In the case of the entity in this example, CompoundEmployee, you may not have any sortable fields.

 

  • Click on Finish

 

An XSD is generated automatically in the project package src.main.resources.wsdl. In the case of an existing file, you also have the option to overwrite it.

The XSD file would be useful when you would like to map your payload to the data in the required format using the mapping step. In this simplistic example this step has been avoided.

 

In case of the Query operation you can use the XSD file on the LHS (Left Hand Side) of the mapping editor and transform it to the required format as required on the RHS (Right Hand Side)

 

In case of the Insert, Upsert, Update operation you can use the XSD file on the RHS (Right Hand Side) of the mapping editor and transform the data that you receive on the LHS (Left Hand Side) before passing it to the SuccessFactors system.

 

STEP 5: Configure the SuccessFactors Adapter – Scheduler Tab

 

The Scheduler tab is available only when using the SuccessFactors Adapter as a Sender channel. You can use it to schedule the triggering of the iFlow at specified intervals.

 

[Screenshot: Scheduler tab]

 

In the above example the Scheduler is triggered every day at 1:55 PM Indian Standard Time (IST)

 

STEP 6: Configure the Receiver Channel

 

[Screenshot: Receiver channel selection]

For this specific example, the easiest way to see your payload output is to configure an SFTP server as the Receiver channel. With this you can see the queried payload on your SFTP server.

  1. Select the Receiver channel as SFTP
  2. Configure the SFTP Adapter settings based on the SFTP server provisioned to you by the SaaS Admin

 

[Screenshot: SFTP adapter settings]

 

STEP 7: Deploy the iFlow

 

  • Save your iFlow and right-click on the project folder to trigger ‘Deploy Integration Content’
  • Specify the tenant to which you would like the iFlow to be deployed

 

[Screenshot: Deploy Integration Content dialog]

 

  • Click on the Worker Node in your Node Explorer. The Component Status view of the node will reflect the status of the deployment

 

[Screenshot: Component Status view]

 

Note: You can also see the status of the iFlow deployment by going to the Deployed Artifacts view of your Tenant Management Node in Node Explorer

 

[Screenshot: Deployed Artifacts view]

 

STEP 8: Check Monitoring and Logs to know the status of execution

 

  • Click on the Tenant Management Node of the Node explorer to bring up the Monitoring View

 

[Screenshot: Monitoring view]

  • Double click on the Message to see further details and Logs in the Properties View

 

[Screenshot: message details and logs in the Properties view]

 

With this you have completed creating and deploying a simple Integration Project that connects to a SuccessFactors system. HAPPY INTEGRATING !!!

 


 

Steps to transport HANA content from source to target system using LCM


Hello All,

 

SAP HANA Application Lifecycle Manager is a stand-alone tool that can be used to transport content from a source system to a target system.

There are three ways to move objects from one HANA system to another (manually, using the Application Lifecycle Manager, or via CTS+ using Solution Manager); this document only describes transporting content from source to target system using lifecycle management.

 

To-do tasks:

  • Create a package and assign it to a delivery unit in the source SAP HANA system.
  • Create a transport route in the target SAP HANA system.
  • In the source SAP HANA system, create an analytic view under the package.
  • Transport the delivery unit (which includes the analytic view as content) from the source SAP HANA system to the target SAP HANA system.
  • Check your transported analytic view in the target system.

 

 

Create a package in the source system and design an analytic view

1.png

 

Once your analytic view is ready, create a delivery unit and assign the package to it. To create a delivery unit, click on Delivery Units on the Quick Launch screen.

 

2.png

3.png

 

Now choose the Create Delivery Unit button and fill in all the required details in the screen below.

 

4.png

Once the delivery unit is created, assign the package created above to it by selecting Add.

 

5.png

Log on to SAP HANA Application Lifecycle Management through XS:

http://hanadgt00:8000/sap/hana/xs/lm/

 

6.png

 

Click on the Transport tab and register a source system by clicking the Register button.

7.png

 

Fill in all the required information, such as hostname and XS engine port.

 

8.png

Click Next

 

9.png

Now choose 'Maintain Destination':

This brings you to the XS application admin screen (if the screen on the left is empty, you lack authorization).


10.png

Once you are done maintaining the credentials in the XS admin screen, go back to the Transport/System screen and finish the configuration.

When you select the SID and ping it now, the result should look as below:


11.png

 

12.png

Now you can configure a Transport route:

 

13.png

Once you select a source system, you will get all available delivery units from this system.

You have the option to set this route up with a default of FULL or DELTA, or without a default (in which case you are prompted during transport creation).

 

14.png

Once you have completed this setup, you can run transports for objects under this delivery unit under Transport – Transports by clicking the Start Transport button:

15.png

You can also monitor the status in the Home screen:

16.png

 

Finally you end up with a message – “Transportation Successful”

 

I hope you have enjoyed this content; please share your valuable comments.

 

Regards,

Sharath Borra


What's new in SAP HANA SPS 7 - Monitoring


Introduction

 

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 7. To get the best overview of what’s new in SAP HANA SPS 7, see Ingo Brenckmann's blog on the SAP HANA community site. The delta presentation comparing changes since SPS 6 by SAP HANA Product Management can be found on the SAP HANA web site.

 

The topic of this post is monitoring; it complements a number of tutorial videos posted to the SAP HANA Academy site and on YouTube:


 

Other blogs on the What's New topic

 

 

What's New with SPS 7?

 

A "dazzling highlight",  in the words of Lars Breddemann are the new graphical editors: Memory Overview (shown below), Resource Allocation and Memory Allocation Statistics. The editors also caught the attention of John Appleby in his blog on monitoring.

memory.png

SAP HANA Studio - Memory Overview

 

 

Although you can add as many as 18 memory-related statistics to a table viewer in HANA studio, the result is not the most intuitive display, as you can see below.

services.png

Memory statistics in SAP HANA Studio, Administration Console, Landscape, Services

 

Before SPS 7, the only way to get a graphical display of resource usage over time was the administration console's Performance - Load tab. This viewer has not changed much (if at all) from the BWA admin tool, which is still included, slightly modified, with SAP HANA (command HDB admin). You need good eyesight, good glasses, or a big screen to view it clearly, as color and line style cannot be modified.

 

admin.pngload.png

Load graph from the SAP HANA database administration tool and SAP HANA studio.

 

Hence, the new web-based graphical editors Memory Overview, Resource Allocation and Memory Allocation Statistics are a welcome enhancement as they provide a much clearer and visually attractive display of monitoring data. They all use the SAP HANA XS built-in application server. You can view the new editors in action in this overview video on what's new with monitoring for SAP HANA SPS 7.

 

 

Monitoring SAP HANA: What's New in SPS 7

 

New Statistics Server

SPS 7 also introduces a new implementation of the statistics server. The statistics server is the component of the SAP HANA database that continuously collects data about system status, performance, and resource usage; it also issues alerts in case of problems. As of SPS 7, it is possible to switch to a new mechanism whereby data collection and alerting are implemented through the execution of SQLScript procedures. This has two advantages.

  1. Improved performance - reduced disk I/O and memory usage, because the separate OS process is replaced by internal procedure calls
  2. Increased configuration flexibility - the statisticsserver.ini properties file is replaced by tables in the _SYS_STATISTICS schema, which allows ad hoc enabling/disabling and scheduling of collections

 

The following video discusses the new statistics server and how to enable it, as it is not active by default; a minimal SQL sketch is shown below.
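
For reference, the switch itself boils down to a single configuration change; a minimal sketch (verify the parameter names against the current SAP documentation before applying this to your system):

-- Activate the new, procedure-based statistics server
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM')
SET ('statisticsserver', 'active') = 'true' WITH RECONFIGURE;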

 

 

Monitoring SAP HANA: New Statistics Server

 

Performance Analysis Guide

New in the documentation set of SAP HANA SPS 7 is the Performance Analysis Guide, which focuses on performance analysis and monitoring. This guide describes what you can do to identify and resolve performance issues and how to enhance the performance of the SAP HANA database in general. Regarding monitoring, the guide describes the different tabs under Performance in the SAP HANA studio administration console, such as threads, sessions, blocked transactions, job progress, and load monitoring.

 

Other documentation on monitoring can be found in the Administration Guide and the Technical Operations Manual. The video below provides a short overview.

 

 

Monitoring SAP HANA: Documentation

 

Monitoring Tools Overview

 

In case you are new to the subject of SAP HANA monitoring - or are preparing for the SAP Certified Technology Associate exam - you may find this overview video about the different tools you can use to monitor SAP HANA helpful.

 

 

Monitoring SAP HANA: Tools Overview

 

System Views

 

More advanced is the topic of system views. SAP HANA contains hundreds of monitoring views (the M_* views in the SYS schema), plus the historical statistics in the _SYS_STATISTICS schema, that you can use to write your own monitoring scripts in SQL. Getting started on this topic can be a bit of a challenge. However, there is a shortcut you can take: when you navigate through SAP HANA studio, going from tab to tab and enabling or disabling settings for the HANA database, what is actually being sent over the line is just plain SQL. In the video below, we show you how to capture these statements using the SQL trace and how they can be used for custom monitoring scripts; a short sketch follows.
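
As a minimal sketch of this approach (the statements Studio actually sends will differ), you can switch on the SQL trace and then reuse a captured query, for example one against the M_SERVICE_MEMORY monitoring view:

-- Capture the SQL that SAP HANA studio sends over the line
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('sqltrace', 'trace') = 'on' WITH RECONFIGURE;

-- Example custom monitoring script: memory used per service, in GB
SELECT host, service_name,
ROUND(total_memory_used_size / 1024 / 1024 / 1024, 2) AS used_gb
FROM SYS.M_SERVICE_MEMORY;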

 

The resulting SQL script is attached to this SAP HANA site document.

 

 

Monitoring SAP HANA: System Views

 

Monitoring Role and Security

 

Both the administration guide and the installation guide clearly state that the SYSTEM user should not be used in a production system.

 

caution.png

However, less clearly documented is how to set up a monitoring role that you can grant to database administrators. What about the built-in MONITORING role, you might say? That is a good start, it is true. It grants CATALOG READ, so you can use the Administration Console perspective in SAP HANA studio, and it grants SELECT on the _SYS_STATISTICS schema, so you can view data in the System Monitor and the different tabs of the Administration editor. However, with only the MONITORING role granted, you will get an error when trying to open the new web-based graphical editors. What privileges are needed to handle a disk full alert? Restart a service? Clean up traces? This and more is explained in the video below; a starting sketch follows.
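
As a starting point, a custom monitoring role could be sketched as follows (the role name is an example; SERVICE ADMIN and TRACE ADMIN are candidate privileges for restarting services and clearing traces, but verify the full privilege list against the video and the SQL reference):

-- What the built-in MONITORING role already provides
CREATE ROLE "MONITORING_OPS";
GRANT CATALOG READ TO "MONITORING_OPS";
GRANT SELECT ON SCHEMA "_SYS_STATISTICS" TO "MONITORING_OPS";

-- Candidates for operational tasks such as restarting a service or clearing traces
GRANT SERVICE ADMIN TO "MONITORING_OPS";
GRANT TRACE ADMIN TO "MONITORING_OPS";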

 

 

Monitoring SAP HANA: Monitoring Role

 

Monitoring Checklist

 

The SAP HANA Administration Guide includes a helpful monitoring checklist. In the last video on monitoring SAP HANA, we show you how to check system availability, how to handle alerts, how to monitor memory, CPU and disk resource usage, and much more; two of these checks are sketched below.
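
Two of the checklist items can be illustrated with simple SQL; a sketch, assuming the new statistics server described above is active:

-- System availability: every service should report ACTIVE_STATUS = 'YES'
SELECT host, service_name, active_status FROM SYS.M_SERVICES;

-- Current alerts collected by the statistics server
SELECT * FROM "_SYS_STATISTICS"."STATISTICS_CURRENT_ALERTS";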

 

As with the video on monitoring tools above, you might find this video tutorial helpful when you are preparing for the SAP Certified Technology Associate exam. Details on this certification can be found in the SAP Education and Certification Shop.

 

 

Monitoring SAP HANA: Checklist

 

To be continued...

You can view more free online videos and hands-on use cases to help you answer the what, how and why questions about SAP HANA and Analytics on the SAP HANA Academy at academy.saphana.com or follow us on Twitter @saphanaacademy.


What's new in SAP HANA SPS 7 - SAP River


The much anticipated SAP River became available with SAP HANA SPS 7 and is attracting a huge amount of interest. To help everyone get up to speed, the SAP HANA Academy has produced a series of video tutorials that show you SAP River hands-on.

 

There are now 19 video tutorials published, totaling almost 3 hours of hands-on content (the HANA Academy is a PowerPoint-free zone)! They cover all aspects, from installation and setup to RDL specifics to interacting via OData, debugging, and completing the user interface with SAPUI5.

 

Many videos include associated code snippets, so it is recommended to view them via the links below rather than directly on the YouTube channel.

 

Getting Started

 

Enabling the Environment

Creating the Environment

 

Hello World

Hello World using OData

 

Setting up an Experience SAP River sandbox

 

Entities, Associations, Actions, and Views

Access Control

 

Creating Applications

 

Generating Test Data

Using Data Preview

Using ODATA calls

 

Modifying applications

 

Debugging with Data Preview

Debugging without Data Preview

Viewing the Trace Logs

 

Deleting the application

 

Creating the web-based UI

Creating the UI using SAPUI5

 

You can always find the latest, greatest index of SAP River tutorials on the SAP HANA Academy site here: www.saphana.com/community/hana-academy/#SAPRiver

 

Want to know more about the SAP HANA Academy? Visit us at academy.saphana.com or follow us on Twitter @saphanaacademy.

Innovation vs Keeping the Lights On


It all started with a tweet from John Appleby:

Screen Shot 2014-01-29 at 10.57.50.png

To which Steve Rumsby responded:

Screen Shot 2014-01-29 at 10.59.00.png

I did not want to miss out on such a good conversation and added my five cents to it. I fully second John's statement, since it really is what makes the most sense. Yet on the other side I have to agree with Steve that indeed "it isn't that simple". This shows that there is a very delicate line for many IT managers and CIOs between driving and adopting innovation and doing what is expected of them: keeping the lights on.

 

Keeping the lights on and maintaining SLAs is probably not something that will drive big pay raises or promotions, yet failing at them could be a lever to lose the job, which clearly nobody wants.

 

John Reed has posted a very interesting blog, CIOs in 2014 - disruptors or disrupted? He refers to the CIO study conducted by IDC. One slide of the study struck me most: 68% of CIOs state that they have difficulties balancing operational excellence and innovation. There is clearly much more to the story than who has the better technology. It is not always pure logic that drives decisions; after all, we are all humans and have our very own agendas, needs, fears, points of view, etc.

 

The Twitter thread between John, Steve, and myself expanded, and we got to some hard numbers.

Screen Shot 2014-01-29 at 11.14.48.png

There is little to argue about this point except the fact, pointed out by Steve, that most customers already have an existing infrastructure, and the shift to the cloud is not perceived as being that simple. There are many questions a customer needs to ask, and he should be well advised on them as well. A customer typically has a large variety of systems in place, which need to run either on-premise or in the cloud.

 

The great advantage of the cloud, besides its cost, is really its simplicity. You can start with a minimum and scale based on your needs. There always has to be a first step, and HANA is a great opportunity to take that first step.

 

Some IT managers have told me that they view the cloud with a certain scepticism, since they feel it takes their power away. For many (probably the vast majority of the 68% mentioned earlier), being able to say "I have XXX servers and YYY people" is still a status symbol of how important they are. The fact is that if they do not adapt, they are doomed to face difficult times. The lines of business will not wait for them to take a decision and provide a solution; they will simply move to the cloud, which would then really be a risk.

 

As people who either sell or advise (or both), it is important that we keep in mind not only the advantages of the solution we want to sell, but also the possible objections and the reasons behind them, in order to be successful. Empathy remains important in the age of real-time and cloud.

SAP River: Getting Started


Want to:

 

Natively develop entire backend applications on SAP HANA?

 

Use one language to design data models, data constraints, business logic, and role-based access control and authorization?

 

See how to download all the components you need to set up the SAP River development environment:


SAP River: Getting Started


View other tutorials on SAP River at the SAP HANA Academy.


SAP HANA Academy - over 500 free tutorial technical videos on using SAP HANA.

Create your own application site using SAP HANA UI Toolkit



 

Since the SAP HANA SPS 7 release, it is easier to start creating your own site using the SAP HANA UI toolkit: the sample data comes embedded and the service kit is part of the HANA installer.

In this blog I would like to introduce the changes regarding the import of delivery units and a few other steps that changed as part of the SPS 7 release.

Thanks to Lucas for introducing the SAP HANA UI Toolkit for Information Access (earlier) in his blog.

I would like to provide further information regarding the latest changes up to the SPS 7 release.

Prerequisite: HANA appliance software is installed on your server.

Necessary role/User for the system access

  • access to the OS level via console, using the <sid>adm user of your HANA
  • access to the DB of your HANA via DB user

 

Install the UI Toolkit

 

Download Delivery Unit

Download the Delivery Unit from SMP (service market place):  https://websmp103.sap-ag.de/support

 

Note: From SP6 onwards, the UI toolkit is available on SMP (SAP Service Marketplace). The service delivery unit (HCO_INA_SERVICE.tgz) is part of the HANA installer.

Path to view the service DU within HANA installer: SYS/global/hdb/content

Import Delivery Unit:

Launch HANA Studio  

Select your HANA instance

On the Quick launch page, choose Content -> Import

Now Select HANA Content -> Delivery unit.

Choose Next

Select the server, browse the Service DU (Service DU on server: SYS/global/hdb/content): HCO_INA_SERVICE.tgz

Similarly, select the client and browse to the UI toolkit DU (downloaded from SMP to your client): HCO_INA_UITOOLKIT.tgz

 

1.jpg

 

SP6: Sample data (EPM: Enterprise Procurement Model) is part of the SYSTEM schema.

SP7: The EPM table content can be found in the INA_EPM_DEMO schema, and the EPM column views in the _SYS_BIC schema.

2.jpg

3.jpg

In the SAP HANA modeler, in the SAP HANA Systems view, under Content, check that the following packages are available:

sap\bc\ina\demos

sap\bc\ina\uitoolkit

sap\bc\ina\api

sap\bc\ina\uicontrols

4.jpg

 

INA role and authorizations for HANA users since SP7

The sap.bc.ina.service.v2.userRole::INA_USER role grants access to the info access HTTP service and enables the processing of metadata retrievals.

The above role can be assigned to a user in different ways: it can either be granted directly to the user or referenced in another role.

System user:

Role required: sap.bc.ina.service.v2.userRole::INA_USER

Self-defined user

Role: sap.bc.ina.service.v2.userRole::INA_USER

For a non-system user, assign the INA role.

Assign the SELECT privilege on the attribute view and the underlying tables.

Assign SELECT and EXECUTE privileges on the _SYS_BIC schema.

Add the analytic privilege "_SYS_BI_CP_ALL" to the user if it is not present. A sketch of these grants in SQL follows.
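
For a self-defined user, the assignments above can also be scripted. A minimal sketch (the user name MYUSER is a placeholder):

-- Grant the repository role through the _SYS_REPO procedure
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.bc.ina.service.v2.userRole::INA_USER', 'MYUSER');

-- SELECT and EXECUTE on the _SYS_BIC schema
GRANT SELECT, EXECUTE ON SCHEMA "_SYS_BIC" TO MYUSER;

-- Analytic privilege covering all activated views
GRANT STRUCTURED PRIVILEGE "_SYS_BI_CP_ALL" TO MYUSER;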


Prepare your Source Data.

Enabling search options in HANA studio and registering and activating the service are no longer relevant; this is available by default.

You should know which attributes you want to enable for freestyle search, which records you want to expose to users, and for which attributes you want to see a count of distinct values or add a filter.

Go ahead and create Attribute view via studio

5.jpg

Select an attribute in your model and observe that there is tab to enable search attributes.

Define freestyle attributes (apply full-text indexes and set up text analysis if required; as a first step, just try enabling one or two attributes for freestyle search, as sketched below).
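
If you decide to apply a full-text index to a column, it can be created in SQL. A hypothetical example (the index, table, and column names below are placeholders, not objects shipped with the demo data):

-- Full-text index with fuzzy search support on a text column
CREATE FULLTEXT INDEX "EPM_PRODUCT_TEXT_IDX"
ON "INA_EPM_DEMO"."PRODUCT_TEXTS" ("TEXT")
FUZZY SEARCH INDEX ON;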

 

(For more details , please refer section 9.4,   http://help.sap.com/hana/SAP_HANA_Developer_Guide_en.pdf)

 

Set Up Dev environment:

Before you create your own site, you need to set up a development environment. With the help of regi, you can modify the SAP HANA repository files.

 

  1. Ensure the SAP HANA client is installed on your machine.
  2. Create a local workspace. Open a command prompt and run the following commands:

hdbuserstore SET <key name> <host name>:3<sys no>15 <USERNAME> <PWD>

regi create workspace <workspace name> --key=<key name>

Change your directory to the newly created local workspace directory, then map the ‘sap.bc.ina.demos’ repository package to your workspace:

regi track sap.bc.ina.demos

regi checkout

(regi checkout writes all the repository objects to your local workspace.)

  3. Develop your own HTML pages and save them. (You can also create a copy of any of the existing UIs and edit it.)
  4. Commit the changes back to the HANA repository:

regi commit

(You should be able to see your UI inside ‘sap > bc > ina > demos’ now.)

  5. Activate all changes in your UI:

regi activate

  6. For more information on regi, refer to the following wiki link:

http://trexweb.wdf.sap.corp:1080/wiki/index.php/Regi


Creating your own site:

Please refer to the previous blog 

http://scn.sap.com/people/lucas.sparvieri/blog/2012/08/13/how-to-create-your-own-web-application-using-sap-hana-ui-toolkit-for-information-access

 

Launch the URL:

http://<HANAhost>:80<instance>/sap/bc/ina/demos/myapp/search.html

 

 

6.jpg

 

Thanks,

Reena S
