Channel: SAP HANA and In-Memory Computing

Modeling CM/YTM/LYTM Comparison in Calculation Views using Input Parameters derived from Scalar Procedure


The motivation for this exercise is based on the approach Uladzislau Pralat  presented here.

Other approaches, presented by Justin Molenaur, Ravindra Channe and Guneet Wadhwa, can be found at the links below:

http://scn.sap.com/community/hana-in-memory/blog/2013/07/26/implementation-of-wtd-mtd-ytd-in-hana-using-script-based-calc-view-calling-graphical-calc-view

http://scn.sap.com/community/hana-in-memory/blog/2014/03/10/how-tocalculate-ytd-mtd-anytd-using-date-dimensions

http://scn.sap.com/community/hana-in-memory/blog/2015/01/09/simple-example-of-year-to-date-ytd-calculation-in-sap-hana

 

The difference in the approach I am discussing here lies in leveraging the Input Parameter’s “Derived from Procedure/Scalar Function” option to deduce the Year_To_Month (CYTM), Last_Year_Current_Month (LYCM) and Last_Year_To_Month (LYTM) values as input parameters, which can then be applied as filters. I am using 'Procedures returning scalar values' to compute the month. This is similar to the approach we adopt in BW/BEx reports, where the CYTM and LYTM variables are populated using a CMOD exit.

 

The demo here is based on SAP HANA SPS 10.

 

Approach:

  • Create a base “reusable” calculation view joining the sales table and the M_TIME_DIMENSION table on the required date field.
  • Create one base input parameter that accepts the user-entered calendar month. This is used further as input for the scalar procedures that calculate CYTM, LYCM and LYTM.
  • Create a calculation view (you may treat it as a reporting view) on top of the base view and apply the input parameters as filters on the CALMONTH field (a plain-SQL sketch of this logic follows below).
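Conceptually, the reporting view produces the same result as the plain SQL below. This is only an illustrative sketch: "BASE" stands for the column view generated from CA_SUPERSTORE_SALES_REUSE, SALES for one of its measures, and the :V_* placeholders for the user-entered month and the derived input parameters described in the following sections.

SELECT 'CM' AS PERIOD_TYPE, SUM(SALES) AS SALES FROM BASE WHERE CALMONTH = :V_CM
UNION ALL
SELECT 'CYTM', SUM(SALES) FROM BASE WHERE CALMONTH BETWEEN :V_CYTM_FROM AND :V_CM
UNION ALL
SELECT 'LYCM', SUM(SALES) FROM BASE WHERE CALMONTH = :V_LYCM
UNION ALL
SELECT 'LYTM', SUM(SALES) FROM BASE WHERE CALMONTH BETWEEN :V_LYTM_FROM AND :V_LYTM_TO;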

 

Development:

  • Create a base Calculation View CA_SUPERSTORE_SALES_REUSE.

1.png

  • Reuse this view in another calculation view and apply a Union on the four projection nodes corresponding to CM, CYTM, LYCM and LYTM.

2.png

Union Node Mapping:

12A.png

  • Observe the filters on each projection node corresponding to the four period categories calculated under the union node.
    • CM: straightforward, CALMONTH is filtered on the directly entered calendar month input parameter.

3.png

    • CYTM: CALMONTH is filtered based on the V_CYTM_FROM input parameter, derived using the scalar procedure that takes V_CM as input.

4A.png

V_CYTM_FROM input parameter definition and its mapping:

5.png

6.png

The code snippet evaluating CYTM_FROM is:

CREATE PROCEDURE "_SYS_BIC"."PR_SUPERSTORE_SALES_CYTM_SCALAR" (IN IN_CALMONTH VARCHAR(6), OUT OUT_CALMONTH VARCHAR(6))
LANGUAGE SQLSCRIPT
SQL SECURITY DEFINER
AS
BEGIN
  DECLARE V_CALMONTH VARCHAR(6);
  DECLARE V_YEAR VARCHAR(4);
  V_YEAR := LEFT(IN_CALMONTH,4);
  V_CALMONTH := CONCAT(:V_YEAR,'01');
  SELECT V_CALMONTH INTO OUT_CALMONTH FROM DUMMY;
END;
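The scalar procedure can be tested standalone from the SQL console before it is wired into the input parameter; the input month below is just a hypothetical example, and '?' binds the scalar OUT parameter:

CALL "_SYS_BIC"."PR_SUPERSTORE_SALES_CYTM_SCALAR"('201603', ?);  -- returns OUT_CALMONTH = '201601'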

 

    • LYCM: Filter details

7A.png

Input Parameter definition and mapping details:

8A.png

Code snippet for LYCM is:

CREATE PROCEDURE "_SYS_BIC"."PR_SUPERSTORE_SALES_LYCM_SCALAR" (IN IN_CALMONTH VARCHAR(6), OUT OUT_CALMONTH VARCHAR(6))
LANGUAGE SQLSCRIPT
SQL SECURITY DEFINER
AS
BEGIN
  DECLARE V_CALMONTH VARCHAR(6);
  DECLARE V_MONTH VARCHAR(2);
  DECLARE V_YEAR VARCHAR(4);
  V_YEAR := LEFT(IN_CALMONTH,4);
  V_YEAR := V_YEAR - 1;
  V_MONTH := RIGHT(IN_CALMONTH,2);
  V_CALMONTH := CONCAT(:V_YEAR,:V_MONTH);
  SELECT V_CALMONTH INTO OUT_CALMONTH FROM DUMMY;
END;

 

    • LYTM:

9.png

Input Parameter definition and mapping details: First screen shot gives the 'From' value corresponding to Last Year and the second one gives the 'To' value

10.png

11.png

Code snippet for LYTM_FROM:

CREATE PROCEDURE "_SYS_BIC"."PR_SUPERSTORE_SALES_LYTM_FROM_SCALAR" (IN IN_CALMONTH VARCHAR(6), OUT OUT_CALMONTH VARCHAR(6))
LANGUAGE SQLSCRIPT
SQL SECURITY DEFINER
AS
BEGIN
  DECLARE V_CALMONTH_FROM VARCHAR(6);
  DECLARE V_YEAR VARCHAR(4);
  V_YEAR := LEFT(IN_CALMONTH,4);
  V_YEAR := V_YEAR - 1;
  V_CALMONTH_FROM := CONCAT(:V_YEAR,'01');
  SELECT V_CALMONTH_FROM INTO OUT_CALMONTH FROM DUMMY;
END;

Code Snippet for LYTM_TO:

CREATE PROCEDURE "_SYS_BIC"."PR_SUPERSTORE_SALES_LYTM_TO_SCALAR" (IN IN_CALMONTH VARCHAR(6), OUT OUT_CALMONTH VARCHAR(6))
LANGUAGE SQLSCRIPT
SQL SECURITY DEFINER
AS
BEGIN
  DECLARE V_CALMONTH_TO VARCHAR(6);
  DECLARE V_YEAR VARCHAR(4);
  DECLARE V_MONTH VARCHAR(2);
  V_YEAR := LEFT(IN_CALMONTH,4);
  V_YEAR := V_YEAR - 1;
  V_MONTH := RIGHT(IN_CALMONTH,2);
  V_CALMONTH_TO := CONCAT(:V_YEAR,:V_MONTH);
  SELECT V_CALMONTH_TO INTO OUT_CALMONTH FROM DUMMY;
END;
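Similarly, a quick sanity check of the last-year range from the SQL console (again with a hypothetical input month):

CALL "_SYS_BIC"."PR_SUPERSTORE_SALES_LYTM_FROM_SCALAR"('201603', ?);  -- returns '201501'
CALL "_SYS_BIC"."PR_SUPERSTORE_SALES_LYTM_TO_SCALAR"('201603', ?);    -- returns '201503'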

 

  • And finally, the output:

13.png

  • The PlanViz: looking at the plan visualization, we can observe the various filters in action.

14.png

Timelines are here:

14A.png

I would like to thank deepak hp for providing the tip to fix the procedure issue; you may refer here for details.

 

 

- Prasad A V


SAP HANA database interactive terminal (hdbsql) - by the SAP HANA Academy


Introduction

 

At the SAP HANA Academy we are currently updating our tutorial videos on the topic of SAP HANA administration for the latest support package stack SPS 11. You can find the full playlist here: SAP HANA Administration - YouTube

 

One of the topics that we have added is the SAP HANA database interactive terminal, or hdbsql as it is mostly known. You can also watch these videos in a dedicated playlist: SAP HANA database interactive terminal - YouTube

 

Overview

 

Hdbsql is a command line tool for executing commands on SAP HANA databases. No Fiori, no cloud, not even colourful Windows, just plain old terminal style command line. It is included with each server installation and also with each SAP HANA client, which is its strongest asset: it is always there for you to rely on.

 

It is called the database interactive terminal, but you can also execute commands non-interactively, as a script. That is probably even the most common use case.

 

The tool is documented on the SAP Help Portal in the last chapter of the Administration Guide: SAP HANA HDBSQL (Command Line Reference) - SAP HANA Administration Guide - SAP Library. The chapter is only a handful of pages and serves as a reference. This means, for example, that you will be informed about the command line option

-S <sqlmode>           either "INTERNAL" or "SAPR3" 

 

but you will not be informed about the use case for SQL mode "SAPR3". Using hdbsql for this reason can be a bit tricky, and if you search the SCN Forums, you will find that at times the tool causes some confusion and bewilderment.

 

Getting Started

 

The first tutorial video shows how to get started with the SAP HANA database interactive terminal.

 

There are hdbsql command line options; for the help file use option -h from the Linux shell

hdbadm> hdbsql -h

 

and there are hdbsql commands, which are displayed with \h or \? from the hdbsql prompt

 

hdbsql> \h

 

As with h for help, commands and command line options sometimes use the same letter, but mostly they do not; also note that the command line options are case sensitive. The table below shows some examples.

 

Usage            Option                     Command
Help screen      -h                         \h
Connect                                     \c
Disconnect                                  \di
Exit                                        \q
Execute                                     \g or ;
Status                                      \s
Input file       -I (uppercase i) <file>    \i <file>
Output file      \e                         \o <file>
Multiline mode   -m                         \mu
SQL mode         -S                         \m
Autocommit       -z                         \a
Separator        -c <separator>

 

 

Installing a License File

 

One good use case for hdbsql is installing a license file. Say you have just installed an SAP HANA system on a slim SUSE or RedHat Linux server without any (X-Windows) graphical environment. There is no Windows client at hand either. How to proceed? Simple! The interactive terminal and the command

 

SET SYSTEM LICENSE ' <license file contents>'
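Once the statement has been executed, you can verify the result from the same hdbsql session, since the installed license shows up in the M_LICENSE monitoring view:

SELECT * FROM M_LICENSE;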

 

In the tutorial video below, I show you how this is done:

 

 

Secure User Store

 

The next video explains how to use the secure user store. This allows you to store SAP HANA connection information, including user passwords, securely on clients. In this way, client applications can connect to SAP HANA without the user having to enter host name or logon credentials.

 

To connect to the SAP HANA database you need to provide username and password

 

hdbadm> hdbsql -u DBA -p P@$w0rd!

 

or

hdbsql> \c -u DBA -p P@$w0rd!

 

The SAP HANA connection defaults to <localhost> with port 30015. So the connect strings above will only work if you execute them on the SAP HANA server with instance number 00. For the other 99.99% of the cases, you will need to provide the "node" information with -n or \n (or just the instance number when on "localhost"). In case of multitenant databases, add -d or \d with database name.

 

hdbadm> hdbsql -u DBA -p P@$w0rd! -n <host>[:<port>] -i <instance number>

or

 

hdbsql> \c -u DBA -p P@$w0rd! \n <host>[:<port>] \i <instance number>

 

If you just provide the username and leave out the password (option or command), you will be prompted to enter it interactively. This is a good practice, of course, as you do not want the password recorded in the history file of the Linux shell or displayed on the screen. However, for any batch job or Windows service connection, this will not suffice and typically you will want to work with the secure user store.

 

In the secure user store the connection string (host:port + user + password) is safely stored with the password encrypted. By default, only the file owner can access this file and you can use this to connect to the SAP HANA database from a script file (backup from cron) or as Windows service (without interactive logon).

 

With a key in the secure user store, you can now connect using command \U or option -U

hdbadm> hdbsql -U KEYNAME

or

 

hdbsql> \c -U KEYNAME

 

The KEYNAME is case sensitive and is the name or alias given to a particular connection string (host:port user password). You can store as many strings as needed. As mentioned, the password is stored encrypted and cannot be extracted in any way.

 

The tool to manage keys in the secure user store is called hdbuserstore and is documented in the Security Guide: hdbuserstore Commands - SAP HANA Security Guide - SAP Library

 

 

Working with Input Files

 

When working with hdbsql it is often convenient to use input or script files. This avoids typo errors and allows for scheduled execution. However, there is another good reason to use an input file. Say you want to perform a select on a table with a namespace between slashes:

 

SQL> SELECT count(*) from /ABCD/MYTABLE

 

On the command line prompt this would cause an issue, because the slash (/) is considered a special character by the shell. So you would have to escape it with a backslash

 

hdbadm> hdbsql -U KEYNAME "SELECT count(*) from \/ABCD\/MYTABLE"

 

Unless you like ASCII art you probably will get tired of this very soon. Best to use an input file here.

 

Another use case is when you want to execute a procedure that contains declarations and statements terminated with a semi-colon (;). The semi-colon is also the default separator for hdbsql so it will start to execute your procedure after the first declaration. How to solve this?

 

DO

BEGIN

  DECLARE starttime timestamp;

  DECLARE endtime timestamp;

  starttime = CURRENT_UTCTIMESTAMP;

 

select count(*) FROM sflight.sairport;

 

  endtime =  CURRENT_UTCTIMESTAMP;

  SELECT

  :starttime AS "Start Time"

  , :endtime AS "End Time"

  , NANO100_BETWEEN( :starttime, :endtime)/10000 AS "Elapsed Time (ms)" FROM DUMMY;

END

#

 

Again, use an input file. End your procedure with another character, for example "#", and then start hdbsql with the command line option -c "#" and -mu for multiline mode. By default, hdbsql runs in single line mode, which means that it will send the contents of the buffer to the SAP HANA server when you hit the Return key.

hdbadm> hdbsql -U KEYNAME -i /tmp/inputfile.sql -c "#" -mu

 

In the video below, I show you some examples of working with input files:

 

 

 

SQLMode = SAPR3

 

But what about SQLMode? In the hdbsql command line reference, a number of commands and command line options are listed, which are a little less obvious.

 

Fortunately, SAP HANA expert and SCN Moderator Lars Breddeman was willing to share his knowledge with me on the more obscure parameters and options. Thanks Lars!

 

sqlmode (internal / SAPR3) - The SQL mode changes how semantics (NULLs, empty strings, trailing spaces, etc.) are handled. This is typically used with the SAP NetWeaver Database Shared Library (DBSL); see DB Connect Architecture - Modeling - SAP Library. Possible use cases are development and support.

 

auto commit - Same functionality as in the SAP HANA Studio: if you ever want to execute more than one command in the same transaction, you need to switch from autocommit ON to OFF and manually COMMIT or ROLLBACK. Also: all things concerning transaction management, like locking or MVCC, can only be demonstrated if you don’t immediately commit.
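A minimal illustration, using a hypothetical table MYTABLE, of why this matters:

-- with autocommit switched OFF (\a, or the -z command line option listed above),
-- both updates stay in one transaction until you end it yourself
UPDATE MYTABLE SET STATUS = 'X' WHERE ID = 1;
UPDATE MYTABLE SET STATUS = 'X' WHERE ID = 2;
ROLLBACK;   -- undoes both; with autocommit ON each UPDATE would already be committed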

 

escape - Escape ON makes sure that all values are enclosed in quotation marks (default setting). If you want to export the output to a file and copy the contents of that file into Microsoft Excel, this causes all values to be interpreted as text. If you want your numbers to be numbers and dates to be dates, set escape OFF.

 

prepared statements - Prepared statements behave a little differently internally; especially for one-off statements, simply executing them without explicit prior preparation and parameter binding might be beneficial. Also, MDX statements are not executable via prepared statements, so if you want to quickly run an MDX statement from hdbsql you have to switch off the use of prepared statements. This is also how it is done in SAP HANA Studio.

 

saml-assertion - New with SAP HANA SPS 10 is that you can now authenticate using a SAML assertion. Hdbsql is a good tool to test SAML implementations.

 

SSL options - This allows for encrypted communication with the SAP HANA database. This needs to be configured on the server side first. The client parameters really deal with the certificate storage on the client side.

 

 

More Information

 

SAP HANA Academy Playlists (YouTube)

SAP HANA Administration - YouTube

SAP HANA database interactive terminal - YouTube

 

Product documentation

SAP HANA HDBSQL (Command Line Reference) - SAP HANA Administration Guide - SAP Library

Secure User Store (hdbuserstore) - SAP HANA Security Guide - SAP Library

hdbuserstore Commands - SAP HANA Security Guide - SAP Library

Install a Permanent License - SAP HANA Administration Guide - SAP Library

 

SCN Blogs

Backup and Recovery: Scheduling Scripts - by the SAP HANA Academy

SAP HANA Academy: Backup and Recovery - Backup Configuration Files

SAP HANA Client Software, Different Ways to Set the Connectivity Data

Primeros pasos con SAP HANA HDBSQL

 

 

Thank you for watching

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy, follow us on Twitter @saphanaacademy, or connect with us on LinkedIn.

How to screw up your HANA database in 5 seconds


I like the SAP HANA database. I really do. Writing demanding SQL statements has never been so much fun since I started throwing them at SAP HANA. And the database simply answers, really quickly. While the database itself works fine, from time to time I stumble upon some strange issues around HANA administration where I notice that SAP HANA is still a quite new database. In certain cases the database is in real danger, so I want to share a perfidious trap with you.

 

You remember that starting with SAP HANA revision 93, a revision update automatically changed the database from the standalone statisticsserver to the embedded statisticsserver? You could in theory keep the standalone statisticsserver, but I believe no one actually did this. So did you ever wonder why the systemOverview.py script produces this irritating warning?

statisticsserver.png

I double-checked this on revision 111. The warning is still there. Now you could say this is a harmless warning and should be ignored. Since SPS 09 a standalone statisticsserver is against the clear recommendation from SAP. However, what if some less experienced HANA administrator sees this message, takes it seriously and tries to start the standalone statisticsserver anyway?

 

TL;DR: DO NOT DO THIS!

 

First of all, SAP has not yet removed the hdbstatisticsserver binary from the IMDB_SERVER.SAR packages. It is still available, even in revision 112.

statisticsserver2.png

However, it should not be possible to run it if you use the embedded statisticsserver, right? Starting the standalone statisticsserver in this scenario should result in an error message, and no harm should be done? Well, not quite. So far the topology for my HANA instance looks like this:

m_services1.png

And now I screw up my HANA database via one simple command:

statisticsserver3.png

Oh no! What have I done? Checking the trace file of this new process shows that it detects the embedded statistics server and disables itself, but only after the topology has already been botched up.

 

[31147]{-1}[-1/-1] 2016-03-22 10:16:36.813528 i StatsServ    StatisticsServerStarter.cpp(00081) : new StatisticsServer active. Disabling myself...
[31147]{-1}[-1/-1] 2016-03-22 10:16:36.834024 i StatsServ    StatisticsServerStarter.cpp(00096) : new StatisticsServer active. Disabling myself DONE.
[31147]{-1}[-1/-1] 2016-03-22 10:16:36.836820 i assign       TREXIndexServer.cpp(01793) : assign to volume 5 finished

 

 

 

So I stop the ominous process asap:

statisticsserver5.png

However, in M_SERVICES I still see the "new" service! This is not nice. How do I clean up this mess?

m_services2.png

m_volumes.png

 

This is not just a cosmetic issue. Important systems are protected by HANA system replication. Now this new (but inactive) service breaks the system replication! This is really bad:

replication1.png

 

 

How can we fix the system replication? Let's try the obvious way on the secondary site:

HDB stop

hdbnsutil -sr_unregister

hdbnsutil -sr_register --name=site2 --mode=sync --remoteHost=eahhan01 --remoteInstance=10

HDB start

 

The procedure seems to work. Unfortunately this does not really reinitialize the replication, because if I try a takeover then I get this error:

takeover.png

 

I cannot even perform a backup on the primary site, because that stupid statisticsserver is not active. Dang!

 

If you have been curious and screwed up your crash & burn instance, then you can try to fix the situation with commands like the following. Proceed at your own risk:

ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini','host','eahhan01') UNSET ('statisticsserver','instances') WITH RECONFIGURE

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini','system') UNSET ('/host/eahhan01','statisticsserver') WITH RECONFIGURE

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini','system') UNSET ('/volumes','5') WITH RECONFIGURE
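To double-check that the stray service and volume are really gone, you can query the same monitoring views shown in the screenshots above:

SELECT HOST, SERVICE_NAME, PORT, ACTIVE_STATUS FROM M_SERVICES;
SELECT * FROM M_VOLUMES;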

For more details, have a look at SAP notes 1697613, 2222249, 1950221.

 

Now the Python script shows that the system replication looks fine again:

replication2.png

 

IMPORTANT: Never rely solely on the output of this check script or on what you see in the HANA studio regarding system replication. I recommend testing the takeover after every change to the topology. It might happen that all lights are green and nevertheless the takeover fails after some topology change.

 

 

Hopefully SAP will remove the false warning about a missing statisticsserver in script systemOverview.py soon. Given their strong commitment to backwards compatibility for SAP HANA, I doubt they will remove the standalone statisticsserver altogether.

High Availability and Disaster Recovery with the SAP HANA Platform on openSAP


Companies have become more dependent upon their IT infrastructure and systems to perform important business tasks throughout their business days as well as outside business hours. It is critical that their IT systems do not fail and are available 24/7 to all end users. With High Availability and Disaster Recovery with the SAP HANA Platform, companies can be assured that their systems are always available. Learners are invited to join the latest openSAP course, High Availability and Disaster Recovery with the SAP HANA Platform, in the SAP HANA Core Knowledge Series, starting this May.

 

High availability (HA) measures the system’s ability to remain accessible in the event of a system component failure and avoid system down times, ensuring the system is always available. Disaster Recovery ensures that, in the unlikely event of a system failure, the system will not lose any data and will be restored fully to its original state. Both of these capabilities are included in SAP HANA Platform with multiple options to select from a recovery time objective (RTO) and recovery point objective (RPO) perspective.

 

The course, High Availability and Disaster Recovery with the SAP HANA Platform, will provide a general overview of SAP HANA platform high availability and disaster recovery features. The course will also expand on the capabilities and demo videos will be provided to showcase the configurations and setup required for operations in a data center environment. The course will run over a three week period and is aimed at enterprise architects, system administrators, and database administrators. Learners taking part in this course should have a basic knowledge of database and software/hardware systems.

 

Enrollment is now open for High Availability and Disaster Recovery with the SAP HANA Platform and is open to everyone interested in learning about this topic; all you need to sign up is a valid email address.

 

As with all openSAP courses, registration, enrollment, and learning content are provided free of charge.

Other upcoming and current courses include:

Build Your Own SAP Fiori App in the Cloud – 2016 Edition

Software Development on SAP HANA (Delta SPS 11)

Implementation of SAP S/4HANA

Implementation Made Simple for SAP SuccessFactors Solutions

Digital Transformation Across the Extended Supply Chain

SAP Business Warehouse powered by SAP HANA (Update Q2/2016)

Sustainability Through Digital Transformation

SAP HANA Cloud Platform Essentials (English)

SAP HANA Cloud Platform Essentials (Japanese)

[SAP HANA Academy] Learn How to Use Core Data Services in SAP S/4 HANA


In a five part video series the SAP HANA Academy's Tahir Hussain Babar (Bob) walks through how to set up and use Core Data Services in SAP S/4 HANA.


Introduction to CDS - Creating a CAL Instance and Creating an ERP User


In the first video of the series Bob details how to get an S/4 HANA instance and how to create the S/4 HANA user that's necessary for executing the tasks performed in the series.

Screen Shot 2016-03-21 at 11.05.25 AM.png

There are a few prerequisites before you can start this series. First, it's assumed that you already have a SAP Cloud Appliance Library account and that you have instantiated a solution with an image called S/4 HANA, on premise edition - Fully Activated. When you click on create instance you are creating a few machines: SAP BusinessObjects BI Platform 4.1 server, SAP ERP 607 server on SAP HANA SPS09, and SAP HANA Windows client. Essentially, you're building three separate servers.

Screen Shot 2016-03-21 at 11.30.09 AM.png

Once you have created this instance, if you move your cursor over the solution title, there is a link to a Getting Started document. It's assumed that you've followed all of the steps detailed in the document in order to instantiate your own instance of S/4 HANA.


When you log into your Windows instance there will be a pair of tools that you will utilize. One is Eclipse, which is used for development and where you will build the CDS views. Please make sure you're using the latest version of Eclipse. You should have the HANA Development tools, including the ABAP perspective, updated on a regular basis. The other tool is the SAP Logon, which is used to access the SAP ERP system.


Open Eclipse and then click on Window > Perspective > Open Perspective > Other and choose ABAP to open a new ABAP perspective.


To create a new ERP user with all rights, first open the SAP Logon and log into client 100, which is a preconfigured S4 client, with the default user and password. Next, run the command /nsu01 to create a new user.

Screen Shot 2016-03-21 at 4.54.04 PM.png

Bob names his user SHA and then clicks the create button. On the Maintain Users screen you must provide a last name in the Address tab and change the default password in the Logon Data tab. Also, in the Profiles tab, you must add SAP_ALL (all SAP System authorizations) and SAP_NEW (new authorization checks). This essentially creates a copy of the default client 100 user so you will have enough roles and rights to perform the tasks carried out later in the series. Click on the save icon to finish creating the new user.

Screen Shot 2016-03-21 at 5.05.35 PM.png

Next, click on the orange log off button and log in as the new user. As it's the first time you're logging in as the new user you will be prompted to change your password.


How to Create an ABAP Project and Load Demo Data

 

Below in the second video of the series Bob details how to create a new ABAP project within Eclipse and how to load demo data.

Screen Shot 2016-03-21 at 5.09.12 PM.png

In the ABAP perspective in Eclipse right click on the projects window and select New > Project > ABAP > ABAP Project and then click Next. Choose to define the system connection manually. Enter your System ID - Bob's is S4H. The Application Server is the IP address of your ERP system. Also, enter your Instance ID (Bob's is 00) before clicking Next. Enter your Client (100), user name, password and preferred language code (EN for English) before clicking on finish. Now you have created an ABAP project within your ERP system using your recently created user and connection.


Drilling down into Favorite Packages will show the temp package ($TMP-SHA) that has been created. S/4 HANA is installed, so there are already a ton of existing CDS views. Scroll down into the APL package and search for and open the ODATA_MM folder. Your CDS views are stored in a folder in the ODATA_MM_ANALYTICS package. Two sub folders exist: Access Controls, which deals with security, and Data Definitions, where you build CDS views.

Screen Shot 2016-03-21 at 9.43.45 PM.png

The CDS highlighted above, C_OVERDUPO, is the CDS behind the overdue purchase orders tile used in the SAP Fiori launchpad.


Back in the SAP GUI log in as the new user you recently created. To load some demo data run the command /nse38. Next, choose SAPBC_DATA_GENERATOR. You will be using S-Flight, which creates a series of tables and BAPIs that enable you to test your ERP system using an airline's booking system's flight data. Next, hit the execute button and select the Standard Data Record option before hitting execute again.

Screen Shot 2016-03-21 at 10.02.50 PM.png

To see the data run the command /nse16. Enter SCARR for the table name and then click on the object at the top left to see the data.

Screen Shot 2016-03-21 at 10.04.49 PM.png

How to Create Interface Views

Screen Shot 2016-03-21 at 10.06.28 PM.png

Bob shows how to create basic, aka interface, views in the series' third video. You will be building a CDS view on top of the data contained in the Demo Data table called SCARR. The table lists the various Airline carriers, the carrier ID and the currency code. You will be exposing this data via OData as a gentle introduction into the CDS concept.


CDS views aren't written in ABAP, but the objects will exist in the ABAP repository. Essentially, a CDS view is a combination of Open SQL and a list of various annotations. The annotations further define the view as well as all of the data elements within that CDS.


In Eclipse right click on the empty package in the ABAP project and select New > Other ABAP Repository Object > Core Data Services. The two options available are DCL Source and DDL Source. DCL Source is used for security by enabling you to implement role-level security. Bob opts for the other option and selects DDL Source before clicking Next.


Bob enters Airline, private view, VDM interface view as his description. However, when you build CDS views they will share a namespace and therefore should not interfere with productive or delivered views. Basically, you must utilize a naming convention.


So Bob names his CDS view ZXSHI_AIRLINE. ZX means it's a development workspace. SH is the first two letters of his user name. I means that it's a basic view. A basic view hits the raw data in your tables. In between there will be a series of other views, with consumption views at the top. Consumption views are what get exposed to analytics or OData.

Screen Shot 2016-03-21 at 10.32.34 PM.png

After clicking on Next, Bob will select his Transport Requests. Transport Requests enable you to move content from system to system and can be used for productive CDS views. However, as these are local CDS views, you won't need any Transport Requests. Clicking Next brings you to the list of Templates. These cover the most common use cases such as joins between different tables or associations. Click Finish to create the CDS view.


Several default annotations are created with the CDS. First, change the define view name in line 5 to match ZXSHI_AIRLINE. Next, hitting control+space next to as select from will bring up code completion so you can find the scarr data source. The bottom left hand side shows an outline for the query that Bob is building up.

Screen Shot 2016-03-21 at 10.57.04 PM.png

On line 6 you will need to select the column. Press control+space and choose scarr.carrid and then add as Airline. Also, add scarr.currcode as AirlineLocalCurrency and scarr.url as AirlineURL.

Screen Shot 2016-03-22 at 9.26.11 AM.png

The first annotation is @AbapCatalog.sqlViewName; its value will essentially be the same name as the view but without any underscores. So enter ZXSHIAIRLINE. To check the view click on the save button and then drill into the view located in the Data Definitions folder in the CDS folder. Activate the package and then open a data preview on the ZXSHI_AIRLINE CDS to see the list of airlines, currencies and URLs.
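As a side note, activating the DDL source also generates a plain database view under that SQL view name, so (assuming you have the necessary authorizations and add the schema prefix where needed) you could also check it from any SQL console:

SELECT * FROM "ZXSHIAIRLINE";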

Screen Shot 2016-03-22 at 9.51.35 AM.png

Another annotation that must be changed is @EndUserText.label on line 4. You should replace the existing technical term with just Airline as it is more readable. This text label is exposed on objects within your OData services. Next, add an annotation to signal that this is a basic/interface view by typing @VDM.viewType: #BASIC on a new line.

Screen Shot 2016-03-22 at 10.02.15 AM.png

Basic views are private, as the end user never accesses them directly. Another type of view is a Composite view, which is an underlying view for analytics. It is used to combine different views via associations. The Consumption view is an end user view, which is accessible through an analytical front-end or is used for publication to OData.


Bob adds an additional annotation, @Analytics.dataCategory: #DIMENSION, on another line so analytics can be used with these views. This indicates that it will be a dimension type table.

Screen Shot 2016-03-23 at 2.01.41 PM.png

There is also a different set of annotations that you will see inside a select statement. For example, the annotation @Semantics won't appear when you try to enter it with all of the other annotations at the top. However, it will appear and can be entered when you type it within the select statement at the bottom of the CDS. Bob adds the annotation @Semantics.currencyCode: true above his scarr.currcode line to indicate that it is a currency code. Bob also adds @Semantics.url: true to indicate that the line below is a URL.

Screen Shot 2016-03-25 at 10.06.24 AM.png

You must define a key if you want to expose the CDS as OData. The carrier ID will be the primary key, so Bob types key in front of the scarr.carrid as Airline.


When you first created this DDL, there was the other option of creating a DCL instead. A DCL involves access controls, as you can define which user will have access to which data in a specific table. Currently there are no DCLs, so the annotation @AccessControl.authorizationCheck: #CHECK needs to be changed to @AccessControl.authorizationCheck: #NOT_REQUIRED. Therefore, in this example there will be no role level security.

Screen Shot 2016-03-25 at 12.47.23 PM.png

Full Syntax - Interface View

-----------------------------------------------------------------------------------------------------------------------------------------

@AbapCatalog.sqlViewName: 'ZXSHIAIRLINE'

@AbapCatalog.compiler.compareFilter: true

@AccessControl.authorizationCheck: #NOT_REQUIRED

@EndUserText.label: 'Airline'

@VDM.viewType: #BASIC

@Analytics.dataCategory: #DIMENSION

define view ZxshI_Airline as select from scarr {

key scarr.carrid as Airline,

@Semantics.currencyCode: true

scarr.currcode as AirlineLocalCurrency,

@Semantics.url: true

scarr.url as AirlineURL

}

-----------------------------------------------------------------------------------------------------------------------------------------

 

Once the CDS looks like the syntax above click on the save button. Activate the CDS and then open a data preview on it to verify there is data in it.


How to Create a Consumption View

Screen Shot 2016-03-25 at 12.51.00 PM.png

In the fourth video in the series Bob walks through how to create a consumption view. Normally there would be many interface views, so there would be a full breadth of dimensions and facts available for the analytics tools. You can use associations to join data from multiple basic views together in a composite view. In this simple demo Bob skips building a composite view and chooses to build a consumption view which he will expose to OData.


Bob right clicks on his data definitions folders and chooses to build a new DDL Source. Bob names his view ZXSHC_AIRLINEQUERY. ZX is for development view, SH are the initials for Bob's user and C is for consumption view. For the description Bob enters Airline query, public view, VDM consumption view and then clicks next. Bob leaves the default for Transport Request and chooses a basic view without any associations or joins before clicking finish to create his consumption view.

Screen Shot 2016-03-25 at 1.57.33 PM.png

First, Bob changes line 6 to define view zxshc_Airlinequery as select from zxshI_Airline {. Next, Bob modifies his sqlViewName in line 1 to 'ZXSHCAIRLINEQ'. Keep in mind that this view name can only be 16 characters long. Then, Bob modifies the annotation in line 4 to read @EndUserText.label: 'Airline'. Afterwards, Bob marks that it's a consumption view by adding a new annotation, @VDM.viewType: #CONSUMPTION, underneath.


Next, Bob selects the columns (Airline, AirlineLocalCurrency and AirlineURL) by pressing control+space on line 7 and choosing Insert all elements - Template. Finally, you must add an annotation beneath the consumption view annotation to expose the view as OData. Bob enters @OData.publish: true.

Screen Shot 2016-03-29 at 9.21.13 AM.png

Save and then activate the consumption view CDS. After activation, you will see that an error has occurred. Hovering the cursor over the caution marker will inform you that there is a missing key element in the ZXSHCAIRLINEQ view.

Screen Shot 2016-03-29 at 10.31.27 AM.png

To fix it, add key at the beginning of line 8 before ZxshI_Airline.Airline. Then save and activate.


Now if you hover the cursor over the OData line it will inform you that the activation needs to be done manually through the /IWFND/MAINT_SERVICE command in ERP. Finally, Bob changes the annotation for the authorizationCheck to #NOT_REQUIRED.


Full Syntax - Consumption View

-----------------------------------------------------------------------------------------------------------------------------------------

@AbapCatalog.sqlViewName: 'ZXSHCAIRLINEQ'

@AbapCatalog.compiler.compareFilter: true

@AccessControl.authorizationCheck: #NOT_REQUIRED

@EndUserText.label: 'Airline'

@VDM.viewType: #CONSUMPTION

@OData.publish: true

define view zxshc_Airlinequery as select from ZxshI_Airline {

key ZxshI_Airline.Airline,

ZxshI_Airline.AirlineLocalCurrency,

ZxshI_Airline.AirlineURL

}

-----------------------------------------------------------------------------------------------------------------------------------------


Creating OData Services

Screen Shot 2016-03-29 at 11.23.16 AM.png

In the fifth and final video in the series Bob shows how to create OData services from the interface and consumption views he recently created based on CDSs from S/4 HANA.


Bob will now have to execute the command /IWFND/MAINT_SERVICE within his ERP system to register the OData service. In the SAP GUI, go to the top level of the ERP and enter the command /IWFND/MAINT_SERVICE. If you get an error informing you to log in but you are already logged in as your user, then place an n in front of the command, so that it reads /n/IWFND/MAINT_SERVICE, before you press enter.


You will register the CDS consumption view you built in the ABAP repository and expose it as OData on the Activate and Maintain Services page. The service that you must add is listed in Eclipse when you click on the marker next to the OData annotation in your consumption view. The service is called ZXSHC_AIRLINEQUERY_CDS. Please copy it.

Screen Shot 2016-03-29 at 2.54.25 PM.png

Back on the Activate and Maintain Services page, click on the Add Service button. For the System Alias choose LOCAL_PGW as it's the S/4 HANA trusted service. Paste in your copied service as the Technical Service Name. Then, hit the Get Services button on the top left hand side. Next, select ZXSHC_AIRLINEQUERY_CDS as the backend service and click on the Add Selected Services button.

Screen Shot 2016-03-29 at 2.59.50 PM.png

The only item that needs to be addressed on the Add Service window that pops up is the Package Assignment under the Creation Information header. Clicking on Local Object will link the package in which you created the CDS in Eclipse ($TMP) to the service in your ERP system. Then, click the execute button and you will get the message that the service was created and its metadata was loaded successfully.


To verify, go back into the ABAP perspective in Eclipse and click on the check ABAP development object button. It will now display a message that an OData service has been generated.

Screen Shot 2016-03-29 at 3.03.27 PM.png

If you click on the ODATA-Service link it will open a new window in your default browser and request that you log in with the appropriate user. Even if the URL of the OData service ends with sap-ds-debug=true, your service is correctly exposed if it looks like the one displayed below.

Screen Shot 2016-03-29 at 3.07.44 PM.png

If you change the extension to $metadata then you can view the OData service's metadata. It shows the names of all of the columns and queries. If you append the query name at the end of the URL you can see the data for each of the 18 airlines.

Screen Shot 2016-03-29 at 3.11.04 PM.png

If you want to learn more about OData syntax please visit the documentation page at odata.org.


For more tutorial videos about What's New with SAP HANA SPS 11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to stay abreast of our latest free tutorials.

Parallelization options with the SAP HANA and R-Integration


Why is parallelization relevant?

 

The R-Integration with SAP HANA aims at leveraging R's rich set of powerful statistical and data mining capabilities, as well as its fast, high-level, built-in convenience operations for data manipulation (e.g. matrix multiplication, data subsetting, etc.) in the context of a SAP HANA-based application. To benefit from the power of R, the R-integration framework requires a setup with two separate hosts for SAP HANA and the R/Rserve environment. A brief summary of how R processing from a SAP HANA application works is described in the following:

  • SAP HANA triggers the creation of a dedicated R-process on the R-host machine, then
  • R-code plus data (accessible from SAP HANA) are transferred via TCP/IP to the spawned R-process.
  • Some computational tasks take place within the R-process, and
  • the results are sent back from R to SAP HANA for consumption and further processing.

For more details, see the SAP HANA R Integration Guide: http://help.sap.com/hana/SAP_HANA_R_Integration_Guide_en.pdf


There are certain performance-related bottlenecks within the default integration setup which should be considered. The main ones are the following:

  • Firstly, latency is incurred when transferring large datasets from SAP HANA to the R-process for computation on the foreign host machine.
  • Secondly, R inherently executes in a single threaded mode. This means that, irrespective of the number of CPU resources available on the R-host machine, an R-process will by default execute on a single CPU core. Besides full memory utilization on the R-host machine, the available CPU processing capabilities will remain underutilized.

A straightforward approach to gain performance improvements in the given setup is by leveraging parallelization. Thus I want to present an overview and highlight avenues for parallelization within the R-Integration with SAP HANA in this blog.

 

Overview of parallelization options

 

The parallelization options to consider vary from hardware scaling (host box) to R-process scaling and are illustrated in the following diagram

0-overview.png

The three main paths to leverage parallelization illustrated above are the following:

1. Trigger the execution of multiple R-calls in parallel from within SQLScript procedures in SAP HANA

2. Use parallel-enabled R functions and algorithms within an R-runtime process

3. Scale the number of R-host machines connected to SAP HANA for parallel execution (scale memory and add computational power)


While each option can be implemented independently of the others, they can also be combined and mixed. For example, if you go for (3), scaling the number of R-hosts, you also need (1), triggering multiple R-calls, for parallelism to take place. Without (1), you merely end up with a better high availability/fault tolerance scenario.

 

Based on the following use case, I will illustrate the different parallelization approaches using some code examples:

A health care unit wishes to predict cancer patients' survival probability over different time horizons, after following various treatment options based on diagnosis. Let's assume the following information:

  • Survival periods for prediction are: half year, one year and two years
  • Accordingly, 3 predictive models have been trained (HALF, ONE, TWO) to predict a new patient's survival probability over these periods, given a set of predictor variables based on historical treatment data.


In a default approach without leveraging parallelization, you would have one R-CALL transferring a full set of new patient data to be evaluated, plus all three models from SAP HANA to the R-host. On the R-host, a single-threaded R process will be spawned. Survival predictions for all 3 periods would be executed sequentially. An example of the SAP HANA stored procedure of type RLANG is as shown below.

0-serial.png

In the code above 3 trained models (variable tr_models) are passed to the R-Process for predicting the survival of new patient data (variable eval). The survival prediction based on each model takes place in the body of the “for loop” statement highlighted above.

 

Performance measurement: For a dataset of 1,038,024 observations (~16.15 MB) and 3 trained BLOB model objects (each ~26.8 MB), an execution time of 8.900 seconds was recorded.

 

There are various sources of overhead involved in this scenario. The most notable ones are:

  • Network communication overhead, in copying one dataset + 3 models (BLOB) from SAP HANA to R.
  • Code complexity, sequentially executing each model in a single-threaded R-process. Furthermore, the “for” loop control construct, though in-built into base R, may not be efficient from a performance perspective in this case.

 

By employing parallelization techniques, I hope to achieve better results in terms of performance. Let the results of this scenario constitute our benchmark for parallelization.

 

Applying the 3 parallelization options to the example scenario

 

1. Parallelize by executing multiple R-calls from SAP HANA

 

We can exploit the inherent parallel nature of SAP HANA's database processing engines by triggering multiple R-calls to run in parallel, as illustrated above. For each R-call triggered by SAP HANA, the Rserve process would spawn an independent R-runtime process on the R-host machine.

 

An example of an SAP HANA SQLScript stored procedure with multiple parallel calls of stored procedures of type RLANG is given below. In this example, the idea is to separate patient survival prediction across 3 separate R-calls as follows:

 

1-1 Rlang.png

  • Create an RLANG stored procedure handling survival prediction for just one model ( see input variable tr_model).
  • Include the expression “READS SQL DATA” (as highlighted above) in the RLANG procedure definition for parallel execution of R-operators to occur when embedded in a procedure of type SQLScript. Without this instruction, R-calls embedded in an SQLScript procedure will execute sequentially.
  • Then create an SQLSCRIPT procedure

1-2 SQLScript.png

  • Embed 3 RLANG procedure-calls within the SQLSCRIPT procedure as highlighted. Notice that I am calling the same RLANG procedure defined previously but I pass on different trained model objects (trModelHalf, trModelOne, trModelTwo) to separate survival predication across different R-calls.
  • In this SQLScript procedure you can include the READS SQL DATA expression (recommended for security reasons as documented in the SAP HANA SQLScript Reference guide) in the SQLSCRIPT procedure definition, but to trigger R-Calls in parallel it is not mandatory. If included however, you cannot use DDL/DML instructions (INSERT/UPDATE/DELETE etc) within the SQLSCRIPT procedure.
  • On the R host, 3 R processes will be triggered and run in parallel. Consequently, 3 CPU cores will be utilized on the R machine. A rough sketch of this pattern is given below.
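Since the code in the screenshots is not reproduced here, the following is only a hedged sketch of the pattern they show; the table types, table names and the R scoring logic are illustrative assumptions, not the original code:

-- RLANG procedure scoring the full patient set against exactly one model;
-- "TT_EVAL", "TT_MODEL" and "TT_RESULT" are assumed table types.
CREATE PROCEDURE "PR_PREDICT_ONE_MODEL" (
  IN  eval     "TT_EVAL",
  IN  tr_model "TT_MODEL",
  OUT result   "TT_RESULT")
LANGUAGE RLANG READS SQL DATA AS   -- READS SQL DATA allows parallel execution
BEGIN
  model  <- unserialize(tr_model$MODEL[[1]])            # restore the trained R model
  prob   <- predict(model, newdata = eval)              # score the patient data
  result <- data.frame(PATIENT_ID = eval$PATIENT_ID, SURVIVAL_PROB = prob)
END;

-- SQLScript wrapper: the three calls are independent, so the calculation engine
-- can process them in parallel (one R runtime per call).
CREATE PROCEDURE "PR_PREDICT_ALL_PERIODS" (
  OUT res_half "TT_RESULT", OUT res_one "TT_RESULT", OUT res_two "TT_RESULT")
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  eval        = SELECT * FROM "NEW_PATIENTS";
  trModelHalf = SELECT * FROM "TRAINED_MODELS" WHERE PERIOD = 'HALF';
  trModelOne  = SELECT * FROM "TRAINED_MODELS" WHERE PERIOD = 'ONE';
  trModelTwo  = SELECT * FROM "TRAINED_MODELS" WHERE PERIOD = 'TWO';
  CALL "PR_PREDICT_ONE_MODEL"(:eval, :trModelHalf, res_half);
  CALL "PR_PREDICT_ONE_MODEL"(:eval, :trModelOne,  res_one);
  CALL "PR_PREDICT_ONE_MODEL"(:eval, :trModelTwo,  res_two);
END;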


Performance measurement: In this parallel R-calls scenario, an execution time of 6.278 seconds was measured. This represents a performance gain of roughly 29.46%. Although this indicates an improvement in performance, we might theoretically have expected a performance improvement close to 75%, given that we trigger 3 R-calls. The answer for this gap is overhead. But which one?

 

In this example, I parallelized survival prediction across 3 R-calls, but still transmit the same patient dataset in each R-call. The improvement in performance can be explained, firstly, by the fact that HANA now transmits less data per R-call (only one model, as opposed to three in the default scenario), so the data transfer is faster; secondly, each model's survival prediction is performed in a separate R-runtime.

 

There are two other avenues we could explore for optimization in this use case. One is to further parallelize the R-runtime prediction itself (see section 2). The other is to further reduce the amount of data transmitted per R-call by splitting the patient dataset in HANA and parallelizing the data transfer across separate R-calls (see section 4).

 

Please note that without the READS SQL DATA instruction in the RLANG procedure definition an execution time of 13.868 seconds was experienced. This is because each R-CALL embedded in the SQLscript procedure is executed sequentially (3 R-call roundtrips).


2. Parallelize the R-execution itself by using parallel R libraries


By default, R execution is single threaded. No matter how much processing resource is available on the R-host machine (64, 32, 8 CPU cores etc.), a single R runtime process will only use one of them. In the following I will give examples of some techniques to improve the execution performance by running R code in parallel.

 

Several open source R packages exist which offer support for parallelism with R. The most popular packages are “parallel”, “foreach”, and “Rmpi”. On a single host you can invoke parallel R-runtimes using the “parallel” and “foreach” packages. More details on the different nature and capabilities of the relevant R-packages are given in the attached paper.

 

The “parallel” package offers a myriad of parallel functions, each specific to the nature of data (lists, arrays etc.) subject to parallelism. Moreover, for historical reasons, one can classify these parallel functions roughly under two broad categories, prefixed by “par-“ (parallel snow cluster) and “mc-“ (multicore).

 

In the following example I use the multicore function mclapply() to invoke parallel R processes on the patient dataset. Within each of the 3 parallel R-runtimes triggered from HANA, I split the patient data into 3 subsets and then parallelize survival prediction on each subset. See the figure below.

2-1.png

The script example above highlights the following:

  • 3 CPU cores are used (variable n.cores) by the R-process.
  • The patient data is split into 3 partitions, according to the number of chosen cores, using the “splitIndices” function.
  • The task to be performed (survival prediction) by each CPU core is defined in the function “scoreFun”.
  • Then I call mclapply(), passing the data partitions (split.idx), the number of CPU cores to use, and the function to be executed by each core.

 

In this example, 3 R-processes (masters) are initially triggered in parallel on the R-host by the 3 R-calls. Then within each master R-runtime, 3 additional child R-processes (workers) are spawned by calling mclapply(). On the R-host, therefore, we will have 3 processing groups executing in parallel, each consisting of 4 R-runtimes (1 master and 3 workers). Each group is dedicated to predicting patient survival based on one model. For this setup 12 CPUs will be used in total.

 

Performance measurement: In this parallel R package scenario using mclapply(), an execution time of 4.603 seconds was observed. This represents roughly a 48.28% gain in performance over the default (benchmark) scenario and roughly a 20% improvement over the parallel R-call example presented in section 1.

 

3. Parallelize by scaling the number of R-Host machines connected to HANA for parallel execution

 

It is also possible to connect SAP HANA to multiple R-hosts, and exploit this setup for parallelization. The major motivation for choosing this option is to increase the number of processing units (as well as memory) available for computation, provided the resources of a single host are not sufficient. With this constellation, however, it would not be possible to control which R-host receives which R request. The choice will be determined randomly via an equally-weighted round-robin technique. From an SQLScript procedure perspective, nothing changes. You can reuse the same parallel R-call scripts as exemplified in section 1 above.

 

Setup Prerequisites

  • Include more than one IPv4 address in the calc engine parameter cer_rserve_addresses in the indexserver.ini or xsengine.ini file (see section 3.3 of the SAP HANA R Integration Guide); a sketch of the corresponding SQL follows below.
  • Set up parallel R-calls within an SQLScript procedure, as described in section 1.
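For reference, such an entry can be maintained via SQL along the following lines (host names and Rserve ports are placeholders; the exact parameter location is described in the R Integration Guide):

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
  SET ('calcengine','cer_rserve_addresses','<rhost1>:<rserve_port>,<rhost2>:<rserve_port>')
  WITH RECONFIGURE;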

3-1 config.png


I configure 2 R-host addresses in the calcengine rserve address option shown above. While still using the same SQLScript procedure as in the 3 R-Calls scenario example (I change nothing in the code), I trigger parallelization of 3 R-calls across two R-host machines.


3-2 Parallel R -call.png

Performance measurement: The scenario took 6.342 seconds to execute. This execution time is similar to the times experienced in the parallel R-calls example. This example only demonstrates that parallelism works in a multi R-host setup. Its real benefit comes into play when the computational resources (CPUs, memory) available on one R-box are not enough.

 

4.  Optimizing data transfer latency between SAP HANA and R

 

As discussed in section 1, one performance overhead lies in the transmission of the full patient data set in each parallel R-call from HANA to R. We could further reduce the data transfer latency by splitting the data set into 3 subsets in HANA and then, using 3 parallel R-calls, transferring each subset from HANA to R for prediction. In each R-call, however, we would also have to transfer all 3 models.

An example illustrating this concept is provided in the next figure.

4-1 split in hana.png

 

In the example above, the following is performed

  • The patient dataset (eval) is split into 3 subsets in HANA (eval1, eval2, eval3).
  • 3 R-calls are triggered, each transferring one data subset together with all 3 models.
  • On the R-host, 3 master R-processes will be triggered. Within each master R-process I parallelize survival prediction across 3 cores using the paired functions mcparallel()/mccollect() from the “parallel” R-package (task parallelism), as shown below.

4-2 parallelize in R.png


  • I create an R function (scoreFun) to specify a particular task. This function focuses on predicting survival based on one model input parameter.
  • For each call of the mcparallel() function an R process is started in parallel and will evaluate the expression in the R function scoreFun. I assign each model individually.
  • With a list of assigned tasks I then call mccollect() to retrieve the results of parallel survival prediction.

In this manner, the overall data transfer latency is reduced to the size of the data in each subset. Furthermore, we still maintain completeness of the data via the parallel R-calls (a rough SQLScript sketch of the HANA-side split is given below). The consistency of the results of this approach is guaranteed if there is no dependency in the result computation between the observations in the data set.
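For the HANA side of this pattern, the split itself could look roughly like the following SQLScript fragment (an illustrative sketch; table and column names are assumptions):

eval  = SELECT *, ROW_NUMBER() OVER (ORDER BY PATIENT_ID) AS RN FROM "NEW_PATIENTS";
-- three disjoint, roughly equal subsets; each one feeds its own R-call
eval1 = SELECT * FROM :eval WHERE MOD(RN, 3) = 0;
eval2 = SELECT * FROM :eval WHERE MOD(RN, 3) = 1;
eval3 = SELECT * FROM :eval WHERE MOD(RN, 3) = 2;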

 

Performance measurement: With this scenario, an execution time of 2.444 seconds was observed. This represents a 72.54% performance gain over the default benchmark scenario. This represents roughly 43% improvement over the parallel R-call scenario example in section 1, and a 24.26% improvement over the parallel R-runtime execution (with parallel R-libraries) example in section 2. A fantastic result supporting the case for parallelization.


Concluding Remarks


The purpose of this blog is to illustrate how techniques of parallelization can be implemented to address performance-related bottlenecks within the default integration setup between SAP HANA and R. The blog presented 3 parallelization options one could consider

  • Trigger parallel R-calls from HANA
  • Use parallel R libraries to parallelize the R-execution
  • Parallelize R-calls across multiple R-hosts.

 

While parallel R libraries help to improve the performance of an R-runtime execution by increasing the number of runtime instances executing on the R-host (see section 2), the key finding is that the data transfer latency can be significantly reduced by splitting the data set in HANA and then parallelizing the transfer of each subset from HANA to R using parallel R-calls.



Other Blog Links

Install R Language and Integrate it With SAP HANA

Custom time series analytics with HANA, R and UI5

How to see which R packages are installed on an R server using SAP HANA Studio.

Quick SAP HANA and R usecase

Let R Embrace Data Visualization in HANA Studio

HANA meets R

Creating an OData Service using R

SAP HANA Operation Expert & Developer Summit 2016 (EMEA)


SAPHANAOPSDEVSUMMIT_BANNER_small.jpg

Join us and Get Ready for the Platform for Digital Transformation


May 11-12, 2016

Walldorf, Germany


For the third year running, we are again looking for SAP HANA Operation Experts and for Developers to join us at the SAP HANA Operation Expert & Developer Summit 2016 in Walldorf, Germany. This exclusive get-together of SAP HANA experts leverages in-depth expertise, interactive feedback sessions and networking opportunities. We want to hear about your use cases, your challenges and your successes in developing and operating the SAP HANA Platform.

 

 

Get ready - Get firsthand information on latest SAP HANA development and administration topics.

 

Provide feedback - Your solid product feedback during the breakout sessions will help to improve future releases.

 

Get connected - Expand your network and join us on the evening of May 11th for a casual meet-up and dinner.

 

 

 

May 11th, Developer Summit + Evening Presentations and Networking Dinner

 

We look forward to talking to developers with experience in building native applications and data models using SAP HANA. Contribute to discussions about the new SAP HANA XS Advanced development and modelling environment, the core data services concept, graphical modelling tools, and SQLScript. See full agenda attached.

 

The evening event with keynotes, customer experiences and a networking dinner will close the Developer Summit and open the Operation Expert Summit. See full agenda attached.

 

 

May 12th, Operation Expert Summit

 

We invite IT Architects and Administrators with experience in operating SAP HANA in their daily work. We want to hear your feedback about cloud & virtualization, landscape and deployment options, mission-critical data center operations, big data, monitoring, administration, and security. See full agenda attached.

 

Get an impression of the 2015 SAP HANA Operation Expert Summit in Walldorf, Germany.

 

 

Regional SAP HANA Summits

SAP HANA Operations & Developers Summit, May 24-25, 2016 in US, Newtown Square, PA

SAP HANA Summits are planned this year for the APJ region. Please watch out for the announcements.

 

 

Registration

The summit is free of charge, but do note that space is limited. Each attendance request will be given careful consideration and we will contact you with a confirmation email to attend the event. If the number of attendance requests exceeds the capacity of the summit, it might become necessary to restrict the number of participants per customer. The event language is English.

 

Register here for an invitation

 

 

Agenda: see attached

SAML SSO setup and configuration


Hi All,

 

My name is Man-Ted Chan and I'm from the SAP HANA Product Support team. This post is just to bring some attention to our SAML SSO setup and configuration documents on our troubleshooting wiki.

 

The wiki can be found here

 

SAP HANA and In-Memory Computing Troubleshooting Guide - Technology Troubleshooting Guide - SCN Wiki

 

Direct links:

 

SAML SSO for BI Platform to HANA V 1 0 0.pdf

SAML SSO for Analysis for Office to HANA V 1 0 0.pdf

 

 

Please note that we are looking to change these pdf files to wiki pages.

 

Also, feel free to let us know in the comments what other types of docs you would like to see.

 

 

Thanks,

 


Man-Ted


SAP HANA Multitenancy Multi-tenant Database encryption with change of encryption root key for SYSTEMDB


Why This Blog:

 

In the course of my work, I am currently, amongst other things, setting up an SAP HANA system running multiple database tenants with a high level of security.

 

In this case, the high-level security measure is to enable Data Volume Encryption on the HANA system.

 

This is the first time I have enabled Data Volume Encryption with a HANA Multitenant Database.

 

After we executed the steps described in the SAP HANA Administration Guide for enabling Data Volume Encryption, alert '57' was raised in our SYSTEMDB, reporting "Inconsistent SSFS". At this point our tenant DB was working without issues, including backup. For the system DB we were experiencing all symptoms reported in SAP Note 2097613.

 

2016-04-04_09-58-11.png

 

Supporting Documentation:

 

SAP HANA Security Guide

Section:

9 Data Storage Security in SAP HANA

 

SAP HANA Administration Guide

Sections:

4.4.1.2 Enable Data Volume Encryption Without System Reinstallation

4.4.2 Data Volume Encryption in Multitenant Database Containers

 

2097613 - Database is running with inconsistent Secure Storage File System (SSFS)

 

Assumptions:

As part of the procedure you have the option to change the encryption root key. You have decided to change the encryption root key of your SYSTEMDB.

You have just converted your single-node SAP HANA system to MDC. There is a SYSTEMDB and a single tenant running in your system.

You have fully encrypted both the SYSTEMDB and the tenant DB.

You have changed the root encryption key of your tenant DB; therefore you are not able to restore the SSFS as described in the SAP Note above, as that would render the DB unusable.



Solution:


Resetting the persistency information of SYSTEMDB in the SSFS


Log in to your SAP HANA system as the <sid>adm user and execute the following commands:

cdexe

./hdbcons

\e hdbnameserver <instance no.> <SAP HANA System name> - This will connect to nameserver of the "SYSTEMDB"

crypto ssfs resetConsistency - This command will reset the consistency information in the SSFS, activating the new key

SAP S/4HANA: How will you get there?


SCN - SAP S4HANA_Ramesh.png

SAP S/4HANA recently celebrated its first birthday, and, as all proud relatives are apt to do, I thought back on the accomplishments and learnings that this first year has brought us. First, as SAP’s greatest innovation since SAP R/3, the new suite has seen a remarkable rollout, with more customer interest and coverage by analysts and the media than anyone saw coming in this short period. I look forward to SAPPHIRE NOW, this May 17-19, where I will share some of TCS’ early customer successes as they migrate to SAP S/4HANA.

 

The last six months have been particularly busy for TCS and our SAP clients as we develop business cases demonstrating the benefits for customers to migrate to SAP S/4HANA, including: overall shrinking of their data footprint, lowering of development costs and reducing total cost of ownership. Here’s another benefit that is perhaps more subtle but very important for IT departments. Customers come to us with legacy systems that include massive customizations. We believe that perhaps as much as 70% of these customizations can be avoided with SAP S/4HANA, freeing up administrative and IT resources for higher uses. As the head of our TCS SAP Practice, Akhilesh Tiwari, recently shared on this blog, for all of these reasons and many more, we believe that SAP S/4HANA will be the big topic for discussion at this year’s SAPPHIRE NOW.

 

SAP S/4HANA: Making the business case

Let’s be clear: Transitioning an organization to SAP S/4HANA requires a considerable investment in effort, time, and money. Most SAP customers, I believe, know they will make the move at some point over the next one to five years. So the initial questions we often hear are around when should I start, how much will I pay and when do I start seeing the benefits?

Regardless of your starting point and systems, building a smart roadmap to migration is the first step to ensure the transition to SAP S/4HANA is smooth. TCS’ proven roadmaps help customers manage large, complex technology and business process transformations in a series of well-defined phases. We start by looking at a client’s business objectives and outlining a technology transition that gets them from their current state to full implementation in the required time period. But our guidance goes well beyond solving technology issues. We help our clients make the SAP S/4HANA business case for their organization. Stakeholders can see at any point along the timeline what costs will be involved, and the expected financial and business returns delivered as the system is implemented.

SAP S/4HANA Roadmap

SAP S/4HANA is proving extremely beneficial to our clients across industries who are challenged to build single balance sheets that accurately reflect multiple product lines. The power to do this is available for the first time with SAP S/4HANA. By giving an enterprise a real-time view into its on-the-ground financial condition, executives are empowered to make decisions about resource allocation much more quickly. The roadmap shows them how and when they will hit these milestones. In this way, TCS partners with customers to sell the project to financial stakeholders.

Our roadmaps also help customers take control of cost planning. We phase our implementation projects over years in order to spread out the cost. If a “big bang” deployment is too pricey, we can spread it out over a series of shorter go-lives, breaking costs into more bite-size chunks. We can also help the customer stage the implementation so that components with earlier payoffs are completed up front.

As I look back on this first year of SAP S/4HANA, I know that it cemented our commitment to help customers not only as technical partners but as partners in making the business case to their organizations. With more and more clients considering their migration plans, TCS is able to share industry-specific insights that help organizations meet their specific business objectives as they make the move to this powerful, next-generation business suite that will position their business for longer-term growth and agility. I look forward to exchanging ideas on how to make the move to SAP S/4HANA successful. Please share your comments here and let me know if you would like to meet during SAPPHIRE NOW in Orlando, May 17-19.

HANA CatEye! Experimental Project with NodeJS + MongoDB + RaspberryPI3


Header.png

Hello Everyone,

In my previous article, I explained BPC on HANA using HANA objects and their advantages. This time I want to write about NodeJS, which is going to be the primary JavaScript runtime in HANA XS with SAP HANA SP11. I have also developed a simple application using NodeJS, named HANA CatEye.

I’m still a beginner in NodeJS, but after digging into the technology I found NodeJS very simple to learn and to develop applications with.

After I read that NodeJS will be included in SAP HANA SP11, I decided to learn more about it and initially planned to develop a basic “hello world” example. Later on I decided to build an application (HANA CatEye) which can benefit my future HANA projects.

One of my biggest problems during development was keeping track of code changes. Sometimes my colleagues or I need to roll back the code to a previous state due to a user’s decision on calculation logic, bugs or performance reasons. What I needed was a version control mechanism and awareness of any code change.
In HANA, there are already standard version control and change management features, but I didn’t find them easy or useful. Hence, the HANA CatEye application is designed to back up HANA development objects to a local DB, create versions for changed objects and compare code versions to see what has changed between versions.

It is built on NodeJS and MongoDB and hosted on a Raspberry Pi 3, a microcomputer about half the size of an iPhone 6 that costs around USD 35.

What is NodeJS ?

It is a JavaScript runtime that uses an event-driven, non-blocking I/O model, which makes it lightweight and efficient. Many of you know JavaScript as something that runs in browsers, but NodeJS is an exception: it doesn’t need a browser to run (thanks to the Google Chrome V8 engine) and it is commonly used to run web servers and handle HTTP requests/responses.

One of the major differences between NodeJS and client-side scripting or many other programming languages is its asynchronous nature. For example, the parameter-reading statements in lines 27 and 28 of my code would execute sequentially, but the I/O functions in lines 31 and 32 are triggered concurrently and do not depend on each other’s result or response time.

So basically the whole point is that it doesn’t have to wait for I/O. Because of that, Node scales well for I/O-intensive requests and allows many users to be connected simultaneously; we can also extend this to database access. Multiple requests can be sent to the HANA database concurrently without interrupting the process flow of the application logic.

NodeJS Modules/Packages

There are many libraries available to install on top of NodeJS to add new functionality to applications. For example, we can use the “Express” module as a web application framework on top of the low-level HTTP library, the “hdb” module to connect to the HANA server, Mongoose for our MongoDB database operations, Swig to render web pages, and many other modules, each with its own purpose. Even for the same purpose, you can find many modules that work with different logic but do the same job. It is a very rich environment.

Installing a module is very easy using NPM, the package manager for NodeJS. For example, installing the hdb module for HANA connectivity is just one line in the command tool.

-> npm install hdb
Below is how I installed hdb into my project on the Raspberry.


After installing, connecting is also very easy (this is the basic connection mode, so we just connect with username and password):

Currently I don’t have an SAP HANA SP11 installation, but in a real implementation we are not going to use the hdb module for HANA connectivity mentioned above; we will be using the SAP-delivered modules that are under the installation folder for SP11.


Below is the usage of modules, for both public and SAP-delivered ones. We use the “require” statement to include modules in our application.

We didn’t want the traditional application server — large-scale, stateful, necessarily takes a lot of memory

SAP HANA Product Management Director Thomas Jung said: “We didn’t want the traditional application server — large-scale, stateful, necessarily takes a lot of memory. We wanted something lightweight and pass-through that was just there as part of the database, and that could do basic RESTful server enablement and web serving, and not a whole lot else”.

They are not only making it lightweight, they are also simplifying the development model.


NodeJS is so lightweight that it can even run on a Raspberry Pi 3 together with MongoDB without any problem. Below is my tiny Raspberry Pi 3, which costs around 35 USD. It runs an OS called Raspbian, a free operating system based on Debian and optimized for the Raspberry Pi hardware. I placed a Lego brick next to it so you can compare the size. So far I have deployed many applications on it without any problems; 1 GB of RAM is already enough for NodeJS applications.

What is HANA CatEye ?

I started learning NodeJS from tutorials and paid more attention after I heard it would be in SAP HANA. Later I decided to build this application based on one of the tutorials related to CRUD operations on MongoDB that I found on the web.
Currently, for my BPC on HANA development, I have more than 100 procedures. We keep adding and removing lines from the current procedures based on needs. Keeping track of the changes is really difficult.

Here comes CatEye: it connects to the HANA server using the hdb module, checks the source code of the existing procedures, creates a new version and saves it to MongoDB, which is also running on the Raspberry Pi 3. MongoDB is a perfect fit for NodeJS applications; it lets us focus on the front end rather than the database layer.
The backup took around 4 seconds for 100+ procedures, which is much faster than SAP HANA Studio :).
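
For reference, the procedure names and their full source text that such a backup needs are available from the HANA catalog; below is a minimal sketch of the kind of statement CatEye could send through the hdb module. The schema name and the ZTEST% filter are purely illustrative; the blog applies its own selection filters on the backend.

-- read name and complete source text of the procedures to be versioned
SELECT SCHEMA_NAME,
       PROCEDURE_NAME,
       DEFINITION
  FROM "SYS"."PROCEDURES"
 WHERE SCHEMA_NAME = 'MYSCHEMA'
   AND PROCEDURE_NAME LIKE 'ZTEST%'
 ORDER BY PROCEDURE_NAME;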

First of all, let's create some dummy procedures and see the result:

We have created 2 procedures named ZTEST_GET_GUID and ZTEST_GET_MATERIAL.
Then, I open the CatEye application and click on the “Meow” button:

CatEye connects to the HANA system and, based on my selection filters on the backend, reads the specific procedures. It also counts the local objects available in the CatEye system, which were either transferred from HANA or added manually. The application sends 2 concurrent requests: one to the HANA server for the procedures to be transferred, and one to MongoDB for the versions already in our local store.
Due to the non-blocking architecture, those requests don’t stop the flow of the application logic, and whenever a response arrives, its result is loaded onto the page separately.
After clicking the Sync button, the procedures are transferred to our local store.

And let's check the objects in our local MongoDB.

For demo purposes, I changed the ZTEST_GET_MATERIAL procedure and also created a new procedure, ZTEST_GET_PRICE, on the HANA layer. Let's Sync again!

Now CatEye has found the new procedure (ZTEST_GET_PRICE) and created it locally, and has also created a second version for the updated procedure (ZTEST_GET_MATERIAL). Let's use version comparison:


The red lines no longer exist in the new version and the green line is the changed line. Now we can easily back up and track what has changed in the procedures :).

You can find the video of the actual integration and version comparison here;

 

 

Overall, the project took 2 weeks to develop. I built it from scratch and really had a lot of fun and learned many things during the development of this experimental project. Several NodeJS modules are used in the application, each with its own specific purpose; some examples are SAP HANA connectivity (hdb), managing asynchronous flows (async), notifications (toastr), parsing POST data (body), MongoDB operations (Mongoose), code highlighting (react-highlight), rendering HTML pages (swig) and showing code differences (diff). I asked Thomas Jung whether SAP is limiting us to particular NodeJS modules/packages, and he said there is no limitation, but we are responsible for any bugs/security problems caused by public modules. I can't wait to use NodeJS in HANA SP11 with real scenarios.

 

You can find samples built with SP11 by Thomas Jung on his GitHub page:
https://github.com/I809764

References:

Thomas Jung SP11 new developer features ; http://scn.sap.com/community/developer-center/hana/blog/2015/12/08/sap-hana-sps-11-new-developer-features-nodejs#comment-659207 .

Why HANA XS is moving to NodeJS ?  http://thenewstack.io/sap-unveils-hanas-full-featured-node-js-integration/

HANA change management and version control; http://scn.sap.com/community/hana-in-memory/blog/2015/02/24/step-by-step-guide-on-sap-hana-version-control-features

Top 10 reasons to use NodeJS :
http://blog.modulus.io/top-10-reasons-to-use-node

[SAP HANA Academy] Discover How to Setup and Use the KPI Modeler in SAP S/4 HANA


Over a series of five tutorial videos, Tahir Hussain "Bob" Babar provides an overview of how to set up and use the KPI Modeler in SAP S/4 HANA. This series is part of the SAP HANA Academy's S/4 HANA playlist. These videos were made with the greatly appreciated help and assistance of Bokanyi Consulting, Inc.'s Frank Chang.


How to Set up a SAP S/4 HANA ERP User

Screen Shot 2016-03-30 at 4.06.00 PM.png

Linked above is the first video in the series where Bob details how to set up a SAP S/4 HANA ERP user. This is accomplished by copying the roles and profiles from an existing user. If you don't want to use your main BPINST user then please follow the steps Bob outlines.


First, log into SAP Logon. This is Bob's connection to both the back-end and the front-end server as he is using a central hub installation. Use 100, the pre-configured S/4 client, as the client and login with your BPINST username and password. Next, choose to run a SU01 - User Maintenance (Add Roles etc.) transaction from the SAP Easy Access screen. Then, choose to look at the BPINST user's rights and navigate to the Roles tab.

Screen Shot 2016-03-31 at 10.39.50 AM.png

Copy all of the roles and then launch a new window by running the command /osu01 to create a new user. Bob names his new user KPI and clicks on the new button. The only information you need to allocate in the Address tab is a last name. In the Logon Data tab enter a password. Then, in the Roles tab, paste in the roles you copied from the BPINST user. Be aware that sometimes all of the roles aren't copied. So double check to make sure that your new user has all of BPINST's roles.


Next, copy the first three profiles (SAP_ALL, SAP_NEW, S_A_SYSTEM) that are listed in the BPINST user's Profiles tab and paste them into the Profiles tab of your new KPI user.

Screen Shot 2016-03-31 at 11.30.33 AM.png

Now you have a duplicate of the BPINST user.


How to Change the SAP Fiori Launchpad with the Launchpad Designer

Screen Shot 2016-03-30 at 5.21.34 PM.png

In the second video of the series Bob provides an overview of the SAP Fiori Launchpad in SAP S/4 HANA. Also, Bob shows how to change the SAP Fiori Launchpad using the SAP Fiori Designer.


In a web browser log into the SAP Fiori Launchpad Designer with the recently created KPI user on Client 100. The SAP Fiori Launchpad Designer enables you to change the look and feel of certain tiles in your SAP Fiori Launchpad. A list of tiles is located on the right side of the SAP Fiori Launchpad Designer and a list of catalogs is along the left.

Screen Shot 2016-03-31 at 11.39.42 AM.png

The tool that the end-user will see is the SAP Fiori Launchpad for SAP S/4 HANA. Bob opens the SAP Fiori Launchpad in another tab. The example Bob shows of a SAP Fiori application is for Operational Processing. Clicking on the hamburger button on the left will open the Tile Catalog. Bob elects to open the KPI Design Catalog.

Screen Shot 2016-03-31 at 11.44.17 AM.png

To provide an example of what an end-user might experience, Bob opens the Sales - Sales Order Processing catalog and then opens the Sales Order Fulfillment All Issues tile. This gives the end user a normal tabular report on Sales Order Fulfillment issues by connecting to a table located in SAP S/4 HANA through OData.

Screen Shot 2016-03-31 at 11.50.11 AM.png

Another tile, Sales Order Fulfillment - Resolved Issues, has an embedded KPI which shows that there are 64 issues that need to be resolved on 29 sales orders.

Screen Shot 2016-03-31 at 5.41.48 PM.png

Back in the SAP Fiori Launchpad Designer, Bob searches for ssb in the Tile Catalog. Bob opens up the SAP: Smart Business Technical Catalog. This is where you can change the form of navigation for a tile including all of the options related to the KPI monitor. The KPI Design Catalog is very similar.

Screen Shot 2016-03-31 at 6.23.05 PM.png

The SAP Fiori Launchpad Designer is used to direct target navigation. To demonstrate, Bob searches for order processing and opens up the Sale - Sales Order Processing catalog. If you view the tiles in list format you will find an Action and a Target URL for each of the tiles. This informs you what will happen when the tile is selected. With the Target Mappings option you can define what will happen when you select a specific tile. You can also choose whether or not the tile can be viewed on a tablet and/or phone as well.

Screen Shot 2016-03-31 at 6.30.29 PM.png

How to Create and Secure a Catalog

Screen Shot 2016-03-30 at 5.22.00 PM.png

Bob details how to create a catalog in the series' third video. Bob also walks through how to secure the catalog so users who are on the SAP Fiori Launchpad can access it.


To create a new catalog, first, click on the plus button at the bottom of the SAP Fiori Launchpad Designer. Bob elects to create a catalog using Standard syntax and gives his a title and an ID of ZX_KPI_CAT. Once the new catalog is created, click on the Target Mapping icon. You can create a new Target Mapping here but the simplest way is to copy a Target Mapping from an existing catalog. So Bob navigates to the Target Mapping for the Sales - Sales Order Processing catalog. Then, Bob selects the Target Mapping at the bottom that has * as its semantic object before clicking on the Create Reference button at the bottom.

Screen Shot 2016-04-01 at 11.43.25 AM.png

Selecting the catalog you've recently created (ZX_KPI_CAT) will create a Target Mapping in that catalog with the same rights as the semantic object you selected from the existing catalog. Now, back in the ZX_KPI_CAT catalog you can confirm that the Target Mapping of * has been replicated.

Screen Shot 2016-04-01 at 7.05.12 PM.png

Next, you must enable a user to access the catalog. So go back into SAP Logon and log in as the KPI user on client 100. Running the command /npfcg will open up role maintenance. This is where you can build a role. Bob names his role ZX_KPI_CAT and selects single role. Bob duplicates the name as the description and saves the role. Then, in the menu tab, Bob chooses SAP Fiori Launchpad Catalog as the transaction. Next, Bob finds and selects his ZX_KPI_CAT in the menu for Catalog ID.

Screen Shot 2016-04-01 at 7.31.09 PM.png

This has built a role that grants access to the ZX_KPI_CAT catalog. Next, Bob opens the User tab and enters KPI as the User ID. Now, after saving, the KPI user can access the ZX_KPI_CAT catalog and the security has been fully setup.


Accessing Core Data Services

Screen Shot 2016-03-30 at 5.22.40 PM.png

In the fourth video of the series Bob shows how to access a Core Data Service. Core Data Services access the SAP S/4 HANA tables, which are ultimately exposed as OData. For more information on how to build and use CDSs please watch this series of tutorials from the SAP HANA Academy.


First, in Eclipse, Bob duplicates the connection he's already established but opts to use the KPI user with client 100 instead of his original SHA user. Now Bob is connected to the SAP S/4 HANA system as the KPI user. Next, Bob finds an already existing CDS by opening a Search on the ABAP Object Repository and searching for an object named ODATA_MM_ANALYTICS. Once the search has located ODATA_MM_ANALYTICS (ABAP Package), Bob opens it and navigates to its Package Hierarchy in order to see its exact link.

Screen Shot 2016-04-05 at 10.04.58 AM.png

ODATA_MM_ANALYTICS is in a sub-package of APPL called ODATA_MM. Navigate to the ODATA_MM package from the System Library on the left-hand side and find ODATA_MM_ANALYTICS before adding it to your favorites. Opening the Data Definitions folder from the Core Data Services folder in the ODATA_MM_ANALYTICS package will show the pre-built Core Data Services. Bob opens C_OVERDUEPO. C_OVERDUEPO is a consumption view. So a BI tool will directly hit it.

Screen Shot 2016-04-05 at 11.25.05 AM.png

Another way to view a CDS's syntax is to right-click on it and choose to open it with the Graphical Editor. This depicts the logical view of the data. The C_OVERDUEPO view comes from the P_OVERDUEP01 view. This is a great way to track the data back to its source table.

Screen Shot 2016-04-05 at 5.11.00 PM.png

To check that the data from the C_OVERDUEPO CDS is correctly exposed as OData, Bob resets his perspective. Then, Bob right-clicks on and opens OData Exposure underneath the secondary objects header in the outline. This opens the OData in a browser and Bob logs in as the KPI user. To test, you can append $metadata to the end of the URL to see the various columns for the entities of the CDS view.

Screen Shot 2016-04-06 at 10.33.16 AM.png

Using the KPI Modeler

Screen Shot 2016-03-30 at 5.23.13 PM.png

In the fifth and final video of the series Bob details how to use the KPI Modeler.


First, Bob opens the KPI Design catalog in the SAP Fiori Launchpad and selects the Create Tile tile. Bob names it KPI Overdue PO and chooses C_OVERDUEPO as the CDS View for the Data Source. Then, Bob selects the corresponding OData Service and entity set called /sap/opu/odata/sap/C_OVERDUEPO_CDS and C_OverduePOResults respectively. For Value Measure Bob selects OverdueDays. Then, he clicks Activate and Add Evaluation.

Screen Shot 2016-04-06 at 10.53.19 AM.png

The evaluation is a filter that regulates what you want the data to show. Bob names the evaluation Last Year - KPI. For Input Parameters and Filters Bob elects to only display EUR as his currency and sets his evaluation period for 365 days. For his KPI Goal Type Bob keeps the default, Fixed Value Type. Bob sets his target threshold for 500, his warning threshold for 300 and his critical threshold for 100. Then, Bob clicks Activate and Configure New.

Screen Shot 2016-04-06 at 11.46.34 AM.png

There Bob is presented with various tile formatting options. In his simple demonstration Bob keeps the default tile configurations. Bob chooses ZX_KPI_CAT as his catalog before clicking on Save and Configure Drill-Down. Drill-Down determines what happens when the KPI is selected. Bob chooses to filter down with a Dimension of Material and a Measure of Overdue Days. This will create the chart depicted below.

Screen Shot 2016-04-07 at 10.21.27 AM.png

Bob gives his view a title of By Product and chooses to use Actual Backend Data. So when the tile is clicked on in the SAP Fiori Launchpad it will link to the chart. After clicking OK, Bob clicks on the + button at the top of the screen to add some of the various charts that are subsequently listed. The selections will appear when the tile is drilled into. You can add additional graphical options if you desire different views of the data. Bob selects two charts before clicking on Save Configuration.

Screen Shot 2016-04-07 at 10.33.49 AM.png

Back on the homepage of the KPI Design window, Bob clicks on the pen icon on the bottom right of the screen to configure what will be seen in the window. Click on the Add Group button and name it. Bob names his KPI's Fiori Tile Group. Then, clicking the + button below the name allows you to add catalogs. It will load all of the catalogs your user has created. Bob adds the ZX_KPI_CAT catalog.

Screen Shot 2016-04-07 at 11.17.03 AM.png

Once you turn off edit mode you can view your Overdue PO tile.

Screen Shot 2016-04-07 at 11.19.32 AM.png

For more tutorial videos about What's New with SAP HANA SPS 11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to stay abreast of our latest free tutorials.

OpenSSL vulnerability DROWN attack CVE-2016-0800


An update for HANA users who want to know more about the OpenSSL DROWN attack.

 

SAP HANA and HANA based applications should not be affected by the DROWN vulnerability.


SAP HANA database uses SAP’s own CommonCryptoLib for communication encryption purposes, which is not affected by DROWN.

 

SAP HANA can be configured to use the OpenSSL instance which is provided by the Linux operating system (provided by Suse or RedHat). SSLv2 is not offered/used in these scenarios.

Therefore this configuration is also not affected by DROWN. Customers are advised to update their operating system according to their maintenance agreements with their operating system vendors. SAP explicitly allows customers to deploy security updates of the operating system.

 

More information:

http://service.sap.com/sap/support/notes/1944799 (SLES)
http://service.sap.com/sap/support/notes/2009879 (Red Hat, see attached document)

SAP HANA extended application services, advanced model (XS Advanced) shipment contains OpenSSL for communication encryption. These channels do not support SSLv2 and are therefore not affected by DROWN.

Introducing SAP HANA Vora1.2


SAP HANA Vora 1.2 was released recently, and with this new version we have added several new features to the product. Some of the key ones I want to highlight in this blog are:

 

  • Support for MapR Hadoop distro
  • Introducing new “OLAP” modeler to build hierarchical data models on Vora data
  • Discovery service using open source Consul – to register Vora services automatically
  • New catalog to replace ZooKeeper as the metadata store
  • Native persistency for the metadata catalog using a distributed shared log
  • Thrift server for client access through JDBC-Spark connectivity

 

The new installer for Vora in ver1.2 extends the simplified installer to be able to use Hadoop management tools like the MapR Control System to deploy Vora on all the Hadoop/Spark nodes. This is in addition to what was provided in ver1.0 for the Cloudera Manager and Ambari admin tools.

 

pic1.png

 

The Vora Modeler provides a rich UI to interact with data stored in Hadoop/HDFS, Parquet, ORC and S3 files, using either the SQL editor or the Data Browser. Once you have the Vora tables in place, you can create “OLAP” models to build dimensional data structures on this data.

 

pic2.png

At the core of Vora we are looking to enable the distributed computing at scale when working with data both in SAP HANA and Hadoop/Spark environments. By pushing down processing of different algorithms to where the data is and by reducing the data movement between the two data platforms we deliver fast query processing and performance for extremely large volumes of data. We have also introduced new features like distributed partitions and co-located joins to achieve these performance optimizations.

 

HANA Vora went GA in early March, and we are seeing several customer use cases that enable Big Data analytics and IoT scenarios. If you are at ASUG/SAPPHIRE in May 2016, stop by to hear real-life customers discuss their implementations and gain insights from these technologies.

 

The Vora developer edition has been updated to ver1.2; you can access it from here

HANA MDC: Tenant crash while recovering other tenant on the same appliance


This blog post is to bring attention to an issue we have been facing with our HANA Multitenant Database Container (MDC) setup.

 

Background:

We have a scale-up MDC setup with more than 15 tenant databases in non-prod on SPS10.

As part of quarterly release activities, we refresh non-prod systems from production MDC tenant backups.

Until last year we had fewer than 10 tenants and the regular refresh was working as expected.

 

Issue:

We introduced more non-prod tenants at the end of last year, and during the next refresh cycle we started noticing a tenant crash while we were working on the refresh of another tenant.

A complete check of the trace logs of the crashed tenant confirmed we had signal 6 errors at exactly the same time the other tenant was being refreshed.

After multiple attempts to bring up the tenant did not work, we had to involve SAP Support to check the cause of the issue.

Meanwhile, we restored the crashed tenant using backups.

 

Cause:
SAP Support took more than a month to identify the cause of the issue, and another occurrence of the same issue while restoring a different tenant confirmed there was a correlation.

SAP confirmed the following: when we have more than 10 tenants on a single MDC appliance we will come across this issue (on version SPS11 revision 112.02 and below).

For example, if we have 15 tenants and the tenant with database ID 5 is being restored using a backup of a production tenant, it will impact the tenant with database ID 15, which will crash and fail to start up. The same issue would occur on the tenants with database IDs 13 and 14 if the tenants with database IDs 3 and 4 are recovered from a backup.

 

Resolution:

 

SAP has addressed the issue in SPS11 database maintenance revision 112.02, which was released today, 12-Apr-2016.

Please find the link for it below, together with the screenshot of the note that confirms the issue.

 

http://service.sap.com/sap/support/notes/2300417

MDC_Issue_Tenants.jpg

Please let me know if you have any thoughts or input on this issue. I hope the blog is useful in understanding the cause of the issue and the available solution.


2016 ASUG Pre-Conference Seminar: Building the Business Case for SAP HANA


Are you exploring the possible benefits that SAP HANA may provide for your company? Are you confident there are strong use cases, yet challenged by putting together that all important Business Case to “sell it” internally? Then this session is for you!

 

Please join us for this interactive session where we will discuss how to prioritize your use cases and determine the critical value drivers to generate a Business Case that will resonate within your company.

 

The session also includes live customer insights, describing their personal experiences through this effort and how they successfully convinced their company of the value and benefits possible with SAP HANA through a solid Business Case.

 

The Agenda for this half-day Pre-Conference seminar includes:

  • Why do you need a business case anyway?
  • Methodology for building a business case
  • Levels of value
  • Value management life cycle
  • Create the storyline
  • Adding the financial dimension
  • Example of the process
  • Best practices approach
  • SAP Benchmarking
  • Bringing it all together
  • Customer testimonial

 

You can find more details about this Pre-Conference and Registration details at:

http://events.sap.com/sapandasug/en/asugpreconf.html#section_4

 

We look forward to meeting you at this ASUG Pre-Conference Seminar on Monday morning, May 16, in Orlando!

 

SAP HANA Solutions GoToMarket team

SAP Global HANA Center of Excellence team

*click* - *click* - *doubleclick* and nothing happens


Today's tidbit is one of those little dumb things that happen every now and then and make me think: "Great, now this doesn't work... WTF...?"

Usually that's a bit frustrating for me as I like to think that I know how stuff works around here (here, meaning my work area, tools, etc.).

 

So here we go. Since the SAP HANA Studio is currently not "an area of strategic investment" and the Web-based tools are on the rise, I try to use those more often.

I even have the easy to remember user-friendly URL (http://<LongAndCrypticNodeName.SomeDomainname.Somethingelse>:<FourDigitPortNumber>/sap/hana/ide/catalog/) saved as a browser bookmark - ain't I organized!


And this thing worked before.

I have used it.

So I click on the link, log on to the instance and get this fancy "picture" (as my Dad would explain it to me - everything that happens on the screen is a "picture", which is really helpful during phone-based intra-family help-desking...):

 

2016-04-14_22-17-36.gif

Pic 1 - The starting 'picture', looking calm and peaceful... for now

 

Ok, the blocky colors are due to GIF file format limitation to 256 colors, but you should be able to see the important bits and pieces.

 

There is some hard-to-read error message that I choose to ignore; I click on the little blue SQL button and then ... nothing happens.

I click again and again as if I cannot comprehend that the computer understood me the first time, but no amount of clicking opens the SQL editor.

What is going on?

Next step:

 

Do the PRO-thing...

     ... open Google Developer Tools...

     ... delete session cookies and all the saved information.

     ... Logon again.

 

Lo and behold, besides the much longer loading time for the page, nothing changed.

 

Great. So what else is wrong? Did the last SAP HANA upgrade mess with the Web tools?

2.gif
Pic 2 - wild clicking on the button and visually enhanced error message indicating some bad thing

 

Luckily, that wasn't it.

Somewhere in the back of my head I remembered, that I had a couple of browser extensions installed.

 

Now I know what you're thinking: Of course it's the browser extensions. That moron! Totally obvious.

What can I say? It wasn't to me.

3.gif

Pic 3 - there's the culprit, the root cause and trigger for hours of frustration

 

It just didn't occur to me that e.g. the Wikiwand browser extension, which I use to get Wikipedia articles in a nicer layout, would install a browser-wide hook on the CTRL+CLICK event, and that this would sometimes prevent the Web tools from opening.

After disabling this (there's a settings page for this extension) the Web tools resumed proper function.

Good job!

 

So is the Wikiwand extension a bad thing? No, not at all. There are tons of other extensions that do the same.

 

While I would really like to demand back the precious hours of my life this little mishap took from me, I assume that this request would be a bit pointless.

To me, at least, this experience leaves me with the insight that I clearly thought too simplistically about the frontend technology we use today. Web browsers are incredibly far from a standard environment, and controlling what the end user finally sees is not easy (if really possible at all).

 

Ok, that's my learning of the day.

 

Cheers,

Lars

 

p.s.

the error message "Could not restore tab since editor was not restorable" not only seems to be a tautology, but also had absolutely nothing to do with the problem in this case.

RANK Function by SQL & Calculation View


RANK logic by the SQL RANK function, plain SQL logic, graphical Calculation View and CE functions.


Scenario:

⦁ Consider a non-SAP load (e.g. a flat file) which is fully loaded daily into a HANA table.

⦁ Because of the full load, all transactions are uploaded into the HANA table every day, unless we implement some pseudo-delta logic on the source side.

⦁ We may get the same transaction multiple times from the source file if there were multiple changes to any key figures for the same transaction ID.

⦁ For example, order 100000 has an order quantity of 10 KG on its creation date.

⦁ On the same day or a subsequent day, the order quantity for this transaction increases from 10 KG to 20 KG.

⦁ So from the non-SAP source we get this transaction multiple times, with the old and new order quantity values and different timestamps.

 

Requirement

So our requirement is to report only the transactions with the latest timestamp, which carry the most current key figures from the source data.

To achieve this, the RANK node in a Calculation View can be useful.

This functionality is the same as the RANK function in SQL.

 

Column Table Creation:

    CREATE COLUMN TABLE <schema name>.SALES_FLAT

              ( SAELSORDER INTEGER,

               SALESITEM  SMALLINT,

               DOC_TYPE VARCHAR(4),

               MATERIAL_NUM NVARCHAR(18),

               ORDER_QTY TINYINT,

               UNIT VARCHAR(2),

               NET_VALUE DECIMAL(15,2),

               CURRENCY VARCHAR(3),

               CREATAED_AT TIMESTAMP

               );

.

Then we load the Day 1 records (loading via SQL here instead of a file import, just to showcase the functionality).

 

DAY1 Load:

INSERT INTO SALES_FLAT VALUES( 10000,10,'ZOR','MAT0001',10,'KG',1500,'INR','2016-04-01 09:10:59');

INSERT INTO SALES_FLAT VALUES( 10000,10,'ZOR','MAT0001',20,'KG',2500,'INR','2016-04-01 09:11:00');

INSERT INTO SALES_FLAT VALUES( 10001,10,'ZOR','MAT0002',10,'KG',4500,'INR','2016-04-01 09:12:15');

INSERT INTO SALES_FLAT VALUES( 10002,10,'ZOR','MAT0003',20,'KG',3500,'INR','2016-04-01 09:13:10');

INSERT INTO SALES_FLAT VALUES( 10003,10,'ZOR','MAT0004',10,'KG',1500,'INR','2016-04-01 09:13:59');

INSERT INTO SALES_FLAT VALUES( 10004,10,'ZOR','MAT0005',10,'KG',1500,'INR','2016-04-01 09:14:59');

INSERT INTO SALES_FLAT VALUES( 10004,10,'ZOR','MAT0005',40,'KG',8500,'INR','2016-04-01 09:15:59');

DAY1:

We have order 10000, item 10, with multiple changes and different timestamps.

We have order 10004, item 10, with multiple changes and different timestamps.

 

DAY2 Load

DAY1 + DAY2 ( Full load ): in this case we have not implemented any Delta logic.

DAY 2 Load records:

INSERT INTO SALES_FLAT VALUES( 10000,10,'ZOR','MAT0001',20,'KG',2500,'INR','2016-04-02 09:10:59');

INSERT INTO SALES_FLAT VALUES( 10000,10,'ZOR','MAT0001',30,'KG',3500,'INR','2016-04-02 09:11:00');

INSERT INTO SALES_FLAT VALUES( 10001,10,'ZOR','MAT0002',20,'KG',5500,'INR','2016-04-02 09:12:15');

INSERT INTO SALES_FLAT VALUES( 10002,10,'ZOR','MAT0003',30,'KG',6500,'INR','2016-04-02 09:13:10');

INSERT INTO SALES_FLAT VALUES( 10003,10,'ZOR','MAT0004',20,'KG',7500,'INR','2016-04-02 09:13:59');

INSERT INTO SALES_FLAT VALUES( 10004,10,'ZOR','MAT0005',20,'KG',8500,'INR','2016-04-02 09:14:59');

INSERT INTO SALES_FLAT VALUES( 10004,10,'ZOR','MAT0005',50,'KG',9500,'INR','2016-04-02 09:15:59');

The DAY2 load timestamp is April 2nd, 2016.

In the DAY2 load we have the same transactions with different timestamps and different key figure values.

So here the requirement is to report only the latest updated changes from table SALES_FLAT.

 

 

RANK function by SQL logic:

SELECT SAELSORDER, SALESITEM, DOC_TYPE,MATERIAL_NUM, ORDER_QTY,UNIT, NET_VALUE, CURRENCY, CREATAED_AT,

    RANK() OVER(PARTITION BY SAELSORDER, SALESITEM ORDER BY CREATAED_AT DESC ) AS "RANK " FROM SALES_FLAT ORDER BY SAELSORDER, SALESITEM;

 

RANK Function.jpg

 

 

Above we can see one extra column, RANK; in this column the orders are sorted based on the CREATAED_AT timestamp and assigned a RANK value.

These values are derived by the SQL RANK functionality.

In the above code, we ranked using PARTITION BY on the sales order number and item columns, with ORDER BY CREATAED_AT DESC.
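
To keep only the most recent record per sales order item directly in SQL, the ranked result can be wrapped in a sub-select and filtered on RANK = 1; a small sketch using the same table:

-- report only the latest timestamp per SAELSORDER / SALESITEM
SELECT SAELSORDER, SALESITEM, DOC_TYPE, MATERIAL_NUM, ORDER_QTY, UNIT,
       NET_VALUE, CURRENCY, CREATAED_AT
FROM (
      SELECT SAELSORDER, SALESITEM, DOC_TYPE, MATERIAL_NUM, ORDER_QTY, UNIT,
             NET_VALUE, CURRENCY, CREATAED_AT,
             RANK() OVER (PARTITION BY SAELSORDER, SALESITEM
                          ORDER BY CREATAED_AT DESC) AS RNK
        FROM SALES_FLAT
     ) RANKED
WHERE RNK = 1
ORDER BY SAELSORDER, SALESITEM;

This gives the same result that the Calculation View described below produces when the threshold (or input parameter) on the RANK column is set to 1.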

 

RANK Logic by SQL without RANK Function:

SELECT SAELSORDER, SALESITEM, DOC_TYPE,MATERIAL_NUM, ORDER_QTY,UNIT, 

            NET_VALUE, CURRENCY, CREATAED_AT,

                   (select count(*) from SALES_FLAT T1 where T1.SAELSORDER = T2.SAELSORDER AND T1.SALESITEM = T2.SALESITEM AND T1.CREATAED_AT < T2.CREATAED_AT) +1 as RANK from SALES_FLAT T2 order by SAELSORDER, SALESITEM;

 

SQL Logic without RANK function.jpg

 

Both outputs are the same, with and without the RANK function.


RANK functionality was introduced in HANA SP8.


 

RANK function by Calculation View:

 

Using the RANK node in a graphical Calculation View


RANK node Cal View.jpg

In the graphical Calculation View's RANK node, after selecting the required table, we need to set values for the required parameters: Sort Direction, Order By and Partition By.

We have a checkbox in the above screenshot to generate an extra column in our Calculation View, which holds the RANK values.

There is a Threshold parameter; this is where we can set a fixed value or pass an input parameter, which is applied to the newly generated column.

It means that if we pass 1, it will report all records having RANK = 1.

 

 

Cal View output.jpg

By passing an input parameter to the Calculation View, we get only those records which we require; this input parameter works on the newly generated column RANK.

Passing the value 1, we get the latest-timestamp transaction items from the Calculation View; this is because in the RANK node the sort order of the CREATAED_AT field is descending.

Microsoft ODBC Driver for SQL Server on Linux - by the SAP HANA Academy


Introduction

 

At the SAP HANA Academy we are currently updating our tutorial videos about SAP HANA administration [SAP HANA Administration - YouTube].

 

One of the topics that we are working on is SAP HANA smart data access (SDA) [SAP HANA Smart Data Access - YouTube].

 

Configuring SDA involves the following activities:

  1. Install an ODBC driver on the SAP HANA server
  2. Create an ODBC data source (for remote data sources that require an ODBC Driver Manager)
  3. Create a remote data source (using SQL or SAP HANA studio)
  4. Create virtual tables and use them in calculation views, etc.

 

As of SPS 11, the following remote data sources are supported:

 

 

  • Apache Hadoop (Simba Apache Hive ODBC)
  • Apache Spark

 

In the SAP HANA Administration Guide, prerequisites and procedures are documented for each supported data source, but the information is intended as a simple guide and you will need 'to consult the original driver documentation provided by the driver manufacturer for more detailed information'.

 

In this series of blogs, I will provide more detailed information about how to perform activities 1 and 2; that is, installing and configuring ODBC on the SAP HANA server.

 

The topic of this blog is the installation and configuration of the Microsoft ODBC driver for SQL Server on Linux.

 

 

Video Tutorial

 

In the video tutorial below, I will show you in less than 10 minutes how this can be done.

 

 

If you would like to have more detailed information, please read on.

 

 

Supported ODBC Driver Configurations

 

At the time of writing, there are two ODBC drivers for SQL Server available for the Linux (and Windows) platform: version 11 and 13 (Preview).

 

Microsoft ODBC driver for SQL Server on Linux | SQL Server | OS (64-bit) | unixODBC | SAP HANA Smart Data Access support
Version 13 (Preview) | 2016, 2014, 2012, 2008, 2005 | RHEL 7, SLES 12 | 2.3.1 | Not supported
Version 11 | 2014, 2012, 2008, 2005 | RHEL 5, 6; SLES 11 | 2.3.0 | SQL Server 2012

 

For SAP HANA smart data access, the only supported configuration is Microsoft ODBC Driver 11 in combination with SQL Server 2012. Supported means that this combination has been validated by SAP development. It does not mean that the other combinations do not work; they probably work just fine. However, if you run into trouble, you will be informed to switch to a supported configuration.

 

Information about supported configurations is normally provided in the SAP Product Availability Matrix on the SAP Support Portal; however, so far only ASE and IQ are listed. For the full list of supported remote data sources, see SAP Note 1868209 - SAP HANA Smart Data Access: Central Note.

 

 

unixODBC

 

On the Windows platform, the ODBC driver manager is bundled together with the operating system but on UNIX and Linux this is not the case so you will have to install one.

 

The unixODBC project is open source. Both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) provide a supported version of unixODBC bundled with the operating system (RPM package). However, Microsoft does not support these bundled unixODBC packages for the Microsoft ODBC Driver Version 11, so you will need to compile release 2.3.0 from the source code. This is described below.

 

unixODBC | Release Date | OS (64-bit) | Microsoft ODBC Driver
2.3.4 (latest) | 08.2015 | |
2.3.2 | 10.2013 | |
2.3.1 | 11.2011 | RHEL 7, SLES 12 | Version 13 (Preview)
2.3.0 | 04.2010 | | Version 11
2.2.14 | 11.2008 | RHEL 6 |
2.2.12 | 10.2006 | SLES 11 |

 

 

System Requirements

 

First, you will need to validate that certain OS packages are installed and if not, install them (System Requirements).

 

This concerns packages like the GNU C Library (glibc), GNU Standard C++ library (libstdc++), the GNU Compiler Collection (GCC) to name a few, without which you will not get very far compiling software. Also, as the Microsoft ODBC Driver supports integrated security, Kerberos and OpenSSL libraries are required.

 

 

Installing the Driver Manager

 

Next, you will need to download and build the source for the unixODBC driver manager (Installing the Driver Manager).

 

  1. Connect as root
  2. Download and extract the Microsoft driver
  3. Run the script build_dm.sh to download, extract, build, and install the unixODBC Driver Manager

 

script.png

 

The build script performs the installation with the following configuration:

# export CPPFLAGS="-DSIZEOF_LONG_INT=8"

# ./configure --prefix=/usr --libdir=/usr/lib64 --sysconfdir=/etc --enable-gui=no --enable-drivers=no --enable-iconv --with-iconv-char-enc=UTF8 --with-iconv-ucode-enc=UTF16LE

# make

# make install

 

Note the PREFIX, LIBDIR and SYSCONFDIR directives. This will put the unixODBC driver manager executables (odbcinst, isql), the shared object driver files, and the system configuration files (odbcinst.ini and odbc.ini for system data sources) all in standard locations. With this configuration, there is no need to set the environment variables PATH, LD_LIBRARY_PATH and ODBCINSTINI for the login shell.

 

 

Installing the Microsoft ODBC Driver

 

Next, we can install the ODBC driver [Installing the Microsoft ODBC Driver 11 for SQL Server on Linux].

 

Take a look again at the output of the build_dm.sh (print screen above). Note the passage:

 

PLEASE NOTE THAT THIS WILL POTENTIALLY INSTALL THE NEW DRIVER MANAGER OVER ANY

EXISTING UNIXODBC DRIVER MANAGER.  IF YOU HAVE ANOTHER COPY OF UNIXODBC INSTALLED,

THIS MAY POTENTIALLY OVERWRITE THAT COPY.

 

For this reason, you might want to make a backup of the driver configuration file (odbcinst.ini) before you run the installation script.

 

  1. Make a backup of odbcinst.ini
  2. Run install.sh --install

 

install.png

 

 

The script will register the Microsoft driver with the unixODBC driver manager. You can verify this with the odbcinst utility:

odbcinst -q -d -n "ODBC Driver 11 for SQL Server"

 

Should the install have overwritten any previous configuration, you either need to register the drivers with the driver manager again or, and this might be easier, restore the odbcinst.ini file and manually add the Microsoft driver.

 

For this, create a template file (for example, mssql.odbcinst.ini.template) with the following lines:

 

 

[ODBC Driver 11 for SQL Server]

Description=Microsoft ODBC Driver 11 for SQL Server

Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2270.0

Threading=1

 

Then register the driver with the driver manager using the command:

odbcinst -i -d -f mssql.odbcinst.ini.template

 

 

Create the data source and test the connection

 

Finally, we can register a data source with the driver manager. For this, create a template file and save it as mssql.odbc.ini.template.

 

You can give the data source any name. Here MSSQLTest is used, but for production systems, using the database name might be more sensible (spaces are allowed for the data source name).

 

Driver = name of the driver in odbcinst.ini or the full path to driver file

Description = optional

Server = host (FQDN); protocol and port are optional, if omitted tcp and 1433 will be used.

Database = database name (defaults to Master)

 

[MSSQLTest]

Driver = ODBC Driver 11 for SQL Server

Description = SQL Server 2012 test instance

; Server = [protocol:]server[,port]

; Server = tcp:mo-9e919a5cc.mo.sap.corp,1433

Server = mo-9e919a5cc.mo.sap.corp

Database = AdventureWorks2012

 

Register the DSN with the driver manager as System DSN using the odbcinst utility:

odbcinst -i -s -l -f mssql.odbc.ini.template

 

Verify:

odbcinst -q -s -l -n "MSSQLTest"

 

Test connection:

isql -v "MSSQLTest" <username> <password>

 

The -v [erbose] flag can be useful in case the connection fails, as it will tell you, for example, that your password is incorrect. For more troubleshooting, see below.
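
Once isql connects, the DSN can be consumed by SAP HANA for activities 3 and 4 from the list at the beginning of this blog (remote source and virtual tables). The statements below are only a rough sketch: the adapter name "mssql", the target schema and the AdventureWorks2012 object names are assumptions for illustration; please check the SAP HANA Administration Guide and the SDA blogs linked at the end for the exact syntax for your revision.

-- assumption: "mssql" is the SDA adapter name for SQL Server
CREATE REMOTE SOURCE "MSSQL_TEST" ADAPTER "mssql"
  CONFIGURATION 'DSN=MSSQLTest'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=<username>;password=<password>';

-- expose a remote table as a virtual table (names are illustrative)
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_SALESORDERHEADER"
  AT "MSSQL_TEST"."AdventureWorks2012"."dbo"."SalesOrderHeader";

SELECT TOP 10 * FROM "MYSCHEMA"."VT_SALESORDERHEADER";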

 

 

System or User Data Source

 

It is up to you, of course, whether to register the data source as a system data source or a user data source. As the SAP HANA server typically is a dedicated database system, using only system data sources has two advantages:

 

  1. Single location of data source definitions
  2. Persistence for system updates

 

With the data sources defined in a single location, debugging connectivity issues is simplified, particularly when multiple drivers are used.

 

With the data sources defined outside of the SAP HANA installation directory, you avoid having your odbc.ini removed when you uninstall or update your system.

 

To register the DSN with the driver manager as User DSN using the odbcinst utility, connect with your user account and execute:

odbcinst -i -s -h -f mssql.odbc.ini.template

 

The difference is the -h (home) flag instead of the -l (local) flag.

 

Verify:

odbcinst -q -s -h -n "MSSQLTest"

 

Test connection (same as when connecting to a system data source):

isql -v "MSSQLTest" <username> <password>

 

user.png

 

Note that when no user data source is defined, odbcinst will return a SQLGetPrivateProfileString message.

 

 

Troubleshooting

 

Before you test your connection, it is always a good idea to validate the input.

 

For the driver, use the "ls" command to verify that the path to the driver is correct.

 

odbcini.png

 

For the data source, use the "ping" command to verify that the server is up and use "telnet" to verify that the port can be reached (1433 for SQL Server is the default but other ports may have been configured; check with the database administrator).

 

ini.png

If you misspell the data source name, the [unixODBC] [Driver Manager] will inform you that the


Data source name not found, and no default driver specified

 

If you make mistakes with the user name or password, the driver manager will not complain but the isql tool will forward the message of the database server.

 

isql.png

 

If the database server cannot be reached, for example because it is not running or because the port is blocked, isql will also inform you by forwarding the message from the database server. Note that the message will depend on the database server used. The information we get back from SQL Server is much more user-friendly than from DB2, for example.

 

connect.png

 

If the driver manager cannot find the driver file, it will return a 'file not found' message. There could be a mistake in the path to the driver file.

 

notfound.png

 

 

More Information

 

SAP HANA Academy Playlists (YouTube)

 

SAP HANA Administration - YouTube

SAP HANA Smart Data Access - YouTube.

 

Product documentation

 

SAP HANA Smart Data Access - SAP HANA Administration Guide - SAP Library

 

SAP Notes

 

1868209 - SAP HANA Smart Data Access: Central Note

 

SCN Blogs

 

SDA Setup for SQLServer 12

SAP HANA Smart Data Access(1): A brief introduction to SDA

Smart Data Access - Basic Setup and Known Issues

Connecting SAP HANA 1.0 to MS SQL Server 2012 for Data Provisioning

SAP Hana Multi-Tenant DB with Replication

 

Microsoft Developer Network (MSDN)

 

Microsoft ODBC Driver for SQL Server on Linux

Download ODBC Driver for SQL Server

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy, follow us on Twitter @saphanaacademy, or connect with us on LinkedIn.

SAP HANA April/May Webinar Series: Best Practices, Case Studies & Lessons Learned



The SAP HANA Webinar Series helps you gain insights
and deliver solutions to your organization.


This acclaimed series is sponsored by the SAP HANA international Focus Group (iFG) for customers, partners, and experts to support SAP HANA implementations and adoption initiatives. Learn about upcoming sessions and download Meeting Invitations here.


>>> Check out our new SAP HANA blog post about the following April & May webinars:


SAP HANA international Focus Group (iFG) Sessions:

  • April 14 – What’s New in SAP HANA Vora 1.2
  • April 21 – Introduction to OLAP Modeling in SAP HANA VORA
  • April 28 – Falkonry; Intelligent Monitoring of IoT Conditions
  • May 5 – Preview of SAP HANA @ SAPPHIRE NOW and ASUG Annual Conferences


SAP HANA Customer Spotlights:

  • April 19 – National Hockey League (NHL) Enables Digital Transformation with SAP HANA Platform: Register >>
  • April 26 – CenterPoint Energy – Analyzing Big Data, Faster with Reduced Storage Costs: Register >>


New to the iFG community?


iFG Webinar Series 4.jpg
