Channel: SAP HANA and In-Memory Computing

Introducing the very first web to HANA extractor using import.io


Recently I wrote a blog on how to use import.io to extract data from any web page and load it into Lumira. This in itself opens up a ton of possibilities: just think about all the information you come across in day-to-day life but have no way of analyzing because it is locked up in a web site. Well, not anymore; import.io can help you extract any data you like, using a user-friendly interface.

 

But sometimes, loading data into Lumira is not enough. Let’s say you want to load that extracted data into HANA, in a single step, for further processing. In that case you would need an extractor connecting directly to HANA. Well my dear friends, that’s exactly what I built.


From a flat file to Lumira to a fully automated extractor

 

I reused the example from my previous blog so that you can easily follow what I did before, but trimmed the results a bit to keep the example easy to follow.

 

I edited my extractor and removed all unnecessary columns. The end result looks like this, three simple columns:

 

New Picture.png

 

Please note that I defined the columns as “Text” so that I don’t get extra metadata in my service, which I don’t need. Import.io is smart and will give extra information like currency and other source data, but for my example I don’t need this; defining the columns as text makes sure that doesn’t happen.

 
Creating a basic extraction script

 

The first thing to do is open up the import.io browser again and press the Integrate button on the “My Data” page:

 

New Picture (1).png

 

This will bring you to a page with integration options for import.io. For my extractor I will use my trusty companion: Python.

 

Now the cool thing is that import.io actually already creates a Python script based on the data sources you just created! Not a single line of extra code is required to get the data in JSON format.

 

Be sure to follow the steps as mentioned on the page:

New Picture (2).png

 

These client libraries are required to be able to execute the Python script. Next to this you also need the “requests” library to push the records into HANA, so download and install it as well, following the instructions here:

 

http://docs.python-requests.org/en/latest/user/install/

 

The next step is to download the example script (be sure to enter your password at step 2 to automatically have your API key filled in!):


New Picture (6).png


Modifications needed to have the script post the records to HANA

 

First off, we need a table in HANA. Create one in the SAP HANA Development view in HANA Studio

 

File name “wsop2.hdbtable”:

 

table.schemaName = "PHILIPS";
table.tableType = COLUMNSTORE;
table.columns = [
  {name = "name"; sqlType = NVARCHAR; length = 100;},
  {name = "bracelets"; sqlType = INTEGER;},
  {name = "rings"; sqlType = INTEGER;}
];
table.primaryKey.pkcolumns = ["name"];

 

Of course, use your own schema!

 

Define your service:

 

File name “wsop2.xsodata”

 

service namespace "wsop2" {
  "PHILIPS"."wsop::wsop2" as "WSOP2";
}

 

These steps make sure that you have a table and a service to post your records to.
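To give an idea of what the service will accept: each POSTed record is a JSON object whose keys match the column names in the wsop2.hdbtable definition above. A minimal sketch (the player values here are hypothetical):

```python
import json

# Hypothetical example row; the keys must match the column names
# defined in wsop2.hdbtable above ("name", "bracelets", "rings").
row = {"name": "Phil Ivey", "bracelets": 10, "rings": 1}
payload = json.dumps(row)
print(payload)
```

This is exactly the shape of the body the script below sends for every row.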

 

Now open the Python script you downloaded in the previous step and add the following lines of code at the bottom:


# Now let's push the records to HANA
# (make sure "import json" and "import requests" are present at the top of the script)
print "Pushing to HANA"
url = 'Your XS Service URL'
headers = {'Content-type': 'application/json;charset=utf-8'}
auth = ('YOUR_USER', 'YOUR_PW')
for row in dataRows:
    r = requests.post(url, data=json.dumps(row), headers=headers, auth=auth)
    if r.status_code == 201:
        print "Record successfully created in HANA"
    else:
        print "We seem to have a duplicate record in HANA!"

 

Of course, enter the URL of the service you defined earlier, along with the user ID and password for your HANA system!


That’s all folks!

 

Really, that’s all folks! With just a couple of simple steps you can extract any data you find on the web and push it into HANA. Don’t know about you, but I am really excited about the possibilities this brings. Gather a ton of information from the web and analyse it to no end in HANA!

 

To clarify the end result, a small clip:

 

 

 

 

Thank you for reading this and take care,

 

Ronald.


How to determine the sizing for BW7.4 on HANA solution if the source is BW2.1c?


One of the challenges I faced when working with an older release of BW was that there was no established sizing methodology that would give enough confidence to determine the sizing of a target BW7.4 on HANA solution based on a source BW2.1c system.

 

I want to share with you how I determined the most reliable way of sizing the target BW7.4 on HANA solution. However, it will only take you as far as the size of the initial BW7.4 on HANA database. You would then need to apply additional design dependencies, e.g. architected marts, data growth, scalability, resilience, etc.

 

Size does matter

In the beginning of any project, when designing the solution, sizing becomes a key discovery element and an input to the solution architecture. At this stage, I had already determined that the BW7.4 solution would comprise ABAP and Java, with BEx Analyser and the BW Portal as the only path for the user experience.

 

I used a combination of the SAP QuickSizer, published SAP sizing whitepapers, SAP OSS Notes, the existing size of the BW2.1c estate, and the users' access approach and how it was apportioned. The results of this process are summarized below.

 

The current level of information was not sufficient to allow a user-based sizing. Therefore, a combination of the following approaches was used to reach an overall sizing estimate:

 

  • Current BW2.1c size
  • Compression ratio
  • SAP Quicksizer Tool
    • Users and their access approach
    • The to-be approach to implement SAP Best Practise for SAP BW on HANA

 

Current BW2.1c size

I applied SAP Note 1637145 to the BW2.1c system; the note provides sizing scripts which need to be applied at the database level. The output gave a rough estimate only, and many HANA-specific features (e.g. non-active data) could not be reflected by these scripts.

 

As per the note there are 2 SQLScripts which need to be run on the database:

  1. get_ora_size.sh or get_ora_win_size.sql
  2. load_RowStore_List.sql

Once the scripts were executed, they produced the following results:

Figure1.png

Figure 1: ColumnStore size for BW2.1c

According to the SAP Note and as shown in Figure 1, the result provided a rough estimation of the ColumnStore Table, around 1.1TB.

Compression Ratio

Taking these results, I then applied an appropriate net compression ratio. SAP Best Practice memory sizing is based on an average compression rate of between 4 and 5. Given the estimated ColumnStore size of 1.1TB, applying the more conservative compression ratio of 1:4 gives an approximate size for BW on HANA of 275GB.
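The arithmetic behind this estimate can be sketched in a few lines (same numbers as above):

```python
source_size_tb = 1.1   # estimated BW2.1c ColumnStore size, in TB
compression = 4        # conservative SAP Best Practice compression ratio (1:4)

# 1.1 TB ~= 1100 GB; divided by the compression factor
hana_size_gb = round(source_size_tb * 1000 / compression)
print(hana_size_gb)    # → 275
```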

SAP Quicksizer Tool

I now wanted to verify that 275GB was a good enough estimate. With that in mind, I used an additional set of assumptions based on the information I had at the time, including:

  • User volumes, User Concurrency, User Type
  • Data Retention/ Dataload
  • Transaction volumes by date
  • DataStore Objects across functional areas

 

The result produced could only be considered an estimate since the assumptions made at the time would have to be further refined.

Figure2.png

Figure 2: SAPS sizing output for SAP BW on HANA & SAP BI Java

Figure3.png

Figure 3: Memory and Disk sizing output for SAP BW on HANA & SAP BI Java

 

 

Conclusion

I was a little surprised at how close the compression-ratio sizing metrics and the Quicksizer result were, as shown in Figures 2 and 3. We already know the current size of the BW2.1c system is approximately 1.1TB. Applying a conservative SAP Best Practice compression factor of 1:4 gives a net compressed size of 275GB. Furthermore, when I compared that to the output from the Quicksizer, it gave an estimated net compressed size of 278GB!

 

Is this a coincidence or a good verified estimate?

SPS 8: Memory Usage of Expensive Statements


An interesting new feature of SPS8 is the ability to easily see the temporary memory used by a long-running statement.

 

To activate this feature you need to switch on 'enable_tracking' and 'memory_tracking' in the global.ini file:
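For reference, a sketch of the relevant global.ini fragment; the [resource_tracking] section name is from my notes, so please verify it against the documentation for your revision:

```
[resource_tracking]
enable_tracking = on
memory_tracking = on
```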

 

 

Then activate the 'Expensive Statement Trace' as normal:


Now execute your long running query.


Unfortunately, in the standard Expensive Statements trace log I couldn't see the new field 'MEMORY_SIZE'.


Fortunately, it can be seen when you run SQL on the M_EXPENSIVE_STATEMENTS view directly:

select "STATEMENT_STRING",
       round(MEMORY_SIZE/1024/1024/1024, 2) as "Memory Size GB",
       round(DURATION_MICROSEC/1000000, 2) as "Seconds", *
from M_EXPENSIVE_STATEMENTS
where "OPERATION" = 'AGGREGATED_EXECUTION'
order by STATEMENT_START_TIME
limit 10;
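The unit conversions in the query are easy to sanity-check in isolation (the input values below are made up):

```python
# Made-up values standing in for one row of M_EXPENSIVE_STATEMENTS
memory_size = 1288490189      # MEMORY_SIZE is reported in bytes
duration_microsec = 3200000   # DURATION_MICROSEC is reported in microseconds

gb = round(memory_size / 1024 / 1024 / 1024, 2)   # bytes -> GB, as in the SQL
seconds = round(duration_microsec / 1000000, 2)   # microseconds -> seconds
print(gb, seconds)   # → 1.2 3.2
```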



I used it to compare a heavy join (10 million reference document line items to 10 million line items in separate tables) with 3 similar queries.

Option:

1) Calculation View (CA_PERF_GRA_001)

2) Calculation View [SQL Engine]  (CA_PERF_GRA_002)

3) Direct SQL Statement

trace options.jpg


Option 1 performed the worst, both in terms of speed and temporary memory usage.

My example data set can be created by following an earlier blog, JOIN's in an Imperfect World.

Options 1 and 2 were defined identically; the only difference between the two is the 'Execute In' setting in the properties.

CA_PERF_GRA_001  is set as the DEFAULT <BLANK>

CA_PERF_GRA_002  is set as 'SQL Engine'


Option 3 was the SQL:

select SUM("XVAL"),
       SUM("YVAL"),
       count(*)
from "PERFORMANCE"."PERF_X" as x
LEFT OUTER JOIN "PERFORMANCE"."PERF_Y" as y
    ON  x."REFDOC"   = y."YDOC"
    and x."REFDOCLN" = y."YDOCLN"
LEFT OUTER JOIN "PERFORMANCE"."PERF_Z" as z
    ON y."ZKEY" = z."ZKEY";

 

 

So in this simple scenario the joint winners, with a duration of ~3.2 seconds and temporary memory of 0.58 GB, were:

2) Calculation View [SQL Engine] (CA_PERF_GRA_002)

3) Direct SQL Statement

In a distant last place, with a duration of ~15 seconds and temporary memory of 3.5 GB, was:

1) Calculation View (CA_PERF_GRA_001)



 

Test out your queries today!

HANA Project Experience


Hi All,

Just want to share a quick issue that we recently faced on our HANA production system.

 

The HANA master node was frozen and did not accept any new incoming connections.

 

 

Following SAP's reply to our support message, we disabled transparent huge pages on all HANA Linux hosts.

 

Disable transparent hugepages on all nodes of your HANA system, according to SAP Note 1824819.


Thanks

Srikanth M

Hands-on video tutorials introducing what's new in predictive with SPS08


Hi,

 

You have likely already heard that SAP HANA SPS08 is generally available.

 

The SAP HANA Academy produces hands-on video tutorials covering all aspects of SAP HANA, and of course we’ve got SPS08 fully covered!

 

Here's the main playlist: SAP HANA SPS 08 - What's New

 

We've published a tutorial for each and every new SAP HANA predictive analysis library (PAL) algorithm as well as covering key enhancements to existing algorithms - so if you want to see how we enable predictive maintenance, distribution fitting, K-medoids clustering, ARIMA forecasting, FP-growth association analysis, CART decision tree, random number generation, or simply how to cancel long running algorithms – now you can!

 

Here are direct links to all of the new PAL videos currently published that feature SPS08:

 

Getting Started

PAL: 53. Getting Started with SPS08

 

Clustering

PAL: 54. Clustering - Kmeans Best K

PAL: 55. Clustering - Kmedoids

 

Time Series

PAL: 56. Time Series - ARIMA Model

PAL: 57. Time Series - ARIMA Predict

PAL: 58. Time Series - Forecast Smoothing Model Selection

 

Association Analysis

PAL: 59. Association - Apriori New Parameters

PAL: 60. Association - FP-growth

 

Classification

PAL: 61. Classification - Decision Tree CART Model

PAL: 62. Classification - Decision Tree CART Predict

PAL: 63. Classification - Logistic Regression Model Cancel

PAL: 64. Classification - Multinomial Logistic Regression Model

PAL: 65. Classification - Multinomial Logistic Regression Predict

 

Statistics

PAL: 66. Statistics - Univariate New Statistics

PAL: 67. Data Preparation - Random Distribution

PAL: 68. Statistics - Distribution Fit

PAL: 69. Statistics - Distribution Fit Censored

PAL: 70. Statistics - Distribution Probability

PAL: 71. Statistics - Distribution Quantile

 

Each tutorial is accompanied by the SQL script shown, and you can also download the example data in order to try the algorithms out for yourself. You'll find the GitHub link in the playlist description, or go there directly via: saphanaacademy/PAL


Here's the full academy playlist for the PAL: Predictive Analysis Library

 

As always, feedback is most welcome – in the YouTube comments section below, tweet me @pmugglestone, or mailto:HanaAcademy@sap.com.

 

Philip

SAP HANA Answers

A knowledge hub for SAP HANA.
SAP HANA Answers allows you to search, post and answer SAP HANA related topics.
Please click on this link, SAP HANA Answers, to access the application.
In the application there are 2 tabs: Search and Ask.
In the Search box we can enter any search term and click on Search. It will display search results from different sources such as the SAP HANA Academy, SCN, SAPHANA.com, SAP Help and so on.
hanaAnswers1.png
We can also ask questions from the application, but first we need to log in.
In the Ask tab, we can first check whether our question has already been asked in any of the sources; if not, we can ask the question and post it to the community.

 

 

We can also install this in SAP HANA Studio by following the below steps:

 

In SAP HANA Studio:

    1. Click on Help->Install New Software
    2. Add a site and complete the wizard

That's it!

Now in SAP HANA Studio, we can highlight some text and press F10 key or click the SAP HANA Answers button in the toolbar.

This gives us the information about the highlighted content.

 

 

 

 


SAP River : Getting started with SAP HANA SPS08


SAP River was first made available via an early adoption program with SAP HANA SPS7 and attracted a huge amount of interest.

 

River remains in the early adoption program for SAP HANA SPS08 but has undergone a number of important changes and enhancements in the meantime. These include:

  • Simplification of the installation and configuration process
  • Introduction of the HANA Studio and web-based "Application Explorer"

 

So to help you get up to speed with River on SPS08, the SAP HANA Academy has produced 5 new tutorials, commencing with: River SPS08: 23. Getting started

 

The others are in the same playlist (#24-27) or use the direct links below:

River SPS08: 24. Enabling the Development Environment

River SPS08: 25. Creating the Development Environment

River SPS08: 26. Hello World

River SPS08: 27. Hello World OData

 

Tutorials are accompanied by code snippets where relevant, and you can download them via the GitHub link in the playlist description or directly via: saphanaacademy/River


Here's the full academy playlist for River: SAP River

 

As always, feedback is most welcome – in the YouTube comments section below, tweet me @pmugglestone, or mailto:HanaAcademy@sap.com.


How to install HANA Studio on Mac OSX using an Update Site


1. Download

Download Eclipse 4.3.2 64-bit for Mac Cocoa from http://eclipse.org/downloads/

Eclipse Download Site.png

 

For this I used the following link, but many other mirrors are available.

http://ftp.wh2.tu-dresden.de/pub/mirrors/eclipse/technology/epp/downloads/release/kepler/SR2/eclipse-standard-kepler-SR2-macosx-cocoa-x86_64.tar.gz

 

2. Extract

Extract the .tar.gz and move into an appropriate location such as inside the Mac Applications folder.

 

3. Start Eclipse

3.1 You may receive a pop-up because the application has been downloaded from an unidentified developer. If so, go to System Preferences, Security & Privacy, and click the Open Anyway button.

 

Download from the Internet.png

 

3.2 You may also receive a pop-up asking to allow incoming connections. Choose Allow if you receive this.

 

Incoming Connections.png

 

4. Configure Eclipse

From the Eclipse menu choose

Eclipse, Preferences

In the Search box type "Update"

Eclipse Preferences.png

5. Add HANA Update Site

You can now add your HANA update site. If you don't know what this is, it follows the format http://<host>:80<instance_number>
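In other words, the port is just 80 followed by the two-digit instance number. A quick sketch with a hypothetical host and instance:

```python
host = "hana.example.com"   # hypothetical hostname
instance = "00"             # two-digit HANA instance number

# 80 + instance number, e.g. instance 00 -> port 8000, instance 15 -> port 8015
update_site = "http://%s:80%s" % (host, instance)
print(update_site)          # → http://hana.example.com:8000
```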

I haven't configured my own update site, but I do know that you can also use the HANA OnDemand tools site, which currently contains HANA Studio Revision 81.

 

Choose Add and then one of the public URLs is

https://tools.hana.ondemand.com/kepler

 

While you're here, you can also add the HANA Answers plugin if you like, using the URL https://answers.saphana.com/updates

 

6. Configure Proxy (optional)

If, like me, you work at SAP and are on the SAP corporate network, then to connect to this site you will need to configure the proxy.

This is also an Eclipse preference. You will need to specify it as Manual; the host is proxy.wdf.sap.corp and the port is 8080.

Note that if you add a proxy, you will probably need to restart Eclipse for the change to take effect.

 

Eclipse Proxy.png

 

7. Install SAP HANA Tools Software

You can now install the HANA Studio add-in for Eclipse from the update site that you have just configured.

Choose Help >> Install New Software

 

Screen Shot 2014-06-20 at 13.44.32.png

 

Screen Shot 2014-06-20 at 13.47.50.png

 

Enter the name of the update site that you have just configured (such as HANA) in the search box and it will autocomplete for you.

You can select the appropriate tools to install.

 

Eclipse .png

You will need to accept the license agreement and choose next.

 

During installation you will receive a security warning; choose OK to continue.

 

Screen Shot 2014-06-20 at 13.52.46.png

 

8. Restart Eclipse

You will then have to restart Eclipse for the HANA Tools to be available.

 

On restart you may receive a warning to allow incoming connections.  Choose Allow.

Incoming Connections.png

 

You should now have HANA Studio for Mac installed.

Open the perspective Modeler and you should now be in a familiar place in HANA Studio.

 

From SP8 onwards, Mac is a supported platform.

 

For official information related to HANA SP8 and the Mac version, see SAP Note 2004651:

http://service.sap.com/sap/support/notes/2004651

 

Let me know how you get on with HANA Studio on Mac.

 

Cheers, Ian.


Reinventing the Data Warehouse with In-Memory Data Fabric from SAP


In March we introduced the In-Memory Data Fabric, SAP’s new enterprise data warehousing architecture. The In-Memory Data Fabric is based on SAP HANA and takes our customers to the next data warehousing level – offering a modular solution with pre-integrated technologies optimized for different logical data requirements.

 

Join us for the TDWI Summer Series, which discusses today’s data warehousing challenges and provides a clear understanding of the value of our In-Memory Data Fabric architecture, highlighting its numerous benefits. Register today!

 

Date: June 25, 2014

In-Memory Fabric: A Modern Approach to Data Warehouse Architecture

 

Business reasons for a faster, more lean, and more virtual data warehouse 

 

Speaker: Philip Russom, TDWI Research Director

 

 

Date: July 24, 2014

Modernize Data Warehousing with Hadoop, Data Virtualization, and In-Memory Techniques

 

Business reasons for a faster, more lean, and more virtual data warehouse 

 

Speaker: Philip Russom, TDWI Research Director

 

Date: August 21, 2014

Stream Processing: Streaming Data in Real-time In-Memory

 

The implications of advanced analytics and in-memory computing

on business decision making and the organization

 

Speaker: Fern Halper, TDWI Research Director

 

Date: August 27, 2014

Achieving Faster and More Agile BI and Analytics with Virtual Data Processing

 

The key elements of emerging virtual or federated data architectures,

and how traditional data warehouse architectures can make the transition

 

Speaker: David Stodder, TDWI Research Director

 

Date: September 11, 2014

Architecture Matters: Real-time In-memory Technologies Do Not Make Data Warehousing Obsolete

 

What the new architectural fabric looks like—the extended data warehouse (XDW),

and how it is impacting and improving today’s data warehouse architectures

 

Speaker: Claudia Imhoff, President of Intelligent Solutions

Extending HANA Live Views


Hi Everyone,

 

 

In this blog I will describe how we can extend HANA Live Views.

I hope many of you are familiar with HANA Live.

If you are not familiar with HANA Live, you can refer to the blog below:

SAP HANA Live - Real-Time operational reporting | SAP HANA

To extend a HANA Live View, we generally make a copy of it and then change the copy as per our needs.

You can check the document given in the blog mentioned above for how to extend HANA Live Views this way.

 

Now SAP has created a new tool called SAP HANA Live Extension Assistant.

Using this tool we can easily extend Reuse Views and Query Views.

 

Let's start with the installation of the Extension tool:

First, download the HANA Content Tools from the Service Marketplace and then import the Delivery Unit HCOHBATEXTN.tgz.

001.jpg

Once this Delivery Unit is installed, we can see the extn folder, as shown in the screenshot below.

002.jpg

It also generates a role, sap.hba.tools.extn.roles::ExtensibilityDeveloper, which we need to assign to the user who will work with this tool.

To complete the installation, go to Help -> Install New Software, click on the Add button, and enter the details as shown below:

The path would be http://hostname:80[instancenumber]/sap/hba/tools/extn

003.jpg

On clicking OK, we will see the Extension tool; we then need to install it.

004.jpg

 

Once this tool is installed, when we right-click on any View we can see the option Extend View.

This option is enabled only for HANA Live Reuse and Query Views, and disabled for Private Views, as shown below.

006.jpg

We can easily identify Query Views in HANA Studio, as their names end with "Query", but we can't distinguish between Reuse and Private Views.

To tell which View is a Reuse View and which is a Private View, log on to the HANA Live Browser and check there, as shown below.

005.jpg

Let's say we want to make changes to the GLAccountInChartOfAccountsReuse View, so we right-click on that view, select Extend View, and are greeted with the screen below.

021.jpg

009.jpg

It will create a new View with the same name as the Reuse View.

Here we select the package where we want the Extended View to be created.

All the fields present in the Reuse View are marked grey and cannot be changed.

On the left side we get a list of all the tables used in the View; these tables only show the fields that are not yet used in the View.

So we can select any field from a table and add it. Let's say we want the SAKAN field: we click on SAKAN and then click the + button on the right side of the screen, which adds it to our view.

010.jpg

 

Then we select Next. By default the join type is Left Outer Join and cannot be changed, but we can change the cardinality of the join.

011.jpg

On the right side it shows fields for the join. As the table SKA1 is already used in this View, it proposes join fields; we can either use these or add our own fields by selecting the + button, as shown below.

012.jpg

 

Then we will Validate and Activate the View

 

The Extended View copies the semantic properties of the Reuse View: if the semantic node in the Reuse View is a Projection, then the semantic node in the Extended View will also be a Projection.

 

Below is the newly created view.

014.JPG

 

One more thing to observe: after installing the Extension tool, the Join node and other nodes are shown in a more elegant and colorful way.

 

Now let's extend the GLAccountInChartOfAccountsQuery Query View.

Right-click on the View and select Extend View; we will then see the screen below.

015.jpg

By default it takes the package in which we extended the Reuse View earlier.

It shows us the columns which are present in the Query View but are not selected in the output.

 

We can also select our extended Reuse View, and its additional fields are then also available to be added to the output, as shown below.

017.jpg

Let's select both fields, then validate and activate the View.

Both Views are now available inside the ExtTest package, as shown below.

019.jpg

Now let's check out the Extended Query View; we can see that both our selected fields are present in the View.

020.jpg

The tool has both benefits and limitations:

 

Benefits:

 

It is a simple tool.

It makes it easy to extend an existing Query or Reuse View if we just want to add additional columns from the underlying tables the View is already using.

If the HANA Live Views are updated later on, our Extended Views also get updated.

 

Limitations:

 

It has many limitations at present

 

We cannot extend Query Views with Unions, and we also cannot directly extend Query Views in which an Aggregation node is present at a level other than the top node (the node before the Semantic node).

At present we cannot add fields from other tables (tables that are not used in the View) to an existing Reuse View, but hopefully this option will be available in the next version.

We cannot create Calculated Columns or change filter options.

 

Hopefully, this tool will get better with time

 

Regards,

Vivek

Phases behind DMO R3load parallel export/import during UPTIME and DOWNTIME to target HANA DB


Having completed several DMO projects (BW, CRM and ERP), I find that a good way to refresh and retain what I have learned is to write up and share the knowledge gained on the DMO parallel export/import phase. This comes from observing and studying its behavior via logs, migration files and command files in each project, in addition to the good sources available in the SAP guides and notes and the great blogs and documents on SCN.

 

In this blog, I will focus on how DMO works its magic migrating the upgraded/updated shadow repository during uptime, and the application data during downtime, to the target HANA DB, as depicted in pictures A and B, steps 2a and 2b (highlighted in red).

 

Picture A: (Source credit to @Roland Kramer and @Boris Rubarth, Thanks!)

 

Picture B: (Source credit to @Roland Kramer and @Boris Rubarth, Thanks!)

 

Picture C: Uptime migration to target, HANA Database (extracted from upgrade analysis xml upon SUM-DMO completion)

 

Picture C shows the DMO-specific phases behind the uptime migration during Preprocessing/Shadow Import. I will talk about the phases highlighted in red: how the shadow repository and its objects are created and moved to the HANA database, and how DMO knows which R3load, kernel and binaries to use when there are 2 different databases involved (in our case, source = Oracle and target = HDB).

 

From the above, we know that the upgraded/updated shadow repository created in the source database is ready to move to the HANA database. The clone size is calculated for 2 groups, UT and DT, based on the objects in the PUTTB_SHD table.

 

UT = system tables (e.g. DD*, REPO*, etc.)

DT = data tables

PUTTB_SHD = control table for the shadow import during the upgrade; it lists the tables that need to be copied and imported to the shadow repository

 

Example selection syntax (the variables vary from phase to phase):

 

EU_CLONE_UT_SIZES:

Selecting from 'PUTTB_SHD' with condition '( ( ( CLONE == "U" or CLONE == "B" ) and ( SRCTYPE == "J" or SRCTYPE == "T" or SRCTYPE == "P" or SRCTYPE == "C" ) and SRCFORM == "T" ) or ( ( CLONE == "S" or CLONE == "C" or CLONE == "F" or CLONE == "G" or CLONE == "T" or ( CLONE == "U" and FLAGNEW == "X" ) ) and ( DSTTYPE == "J" or DSTTYPE == "T" or DSTTYPE == "P" or DSTTYPE == "C" ) and DSTFORM == "T" ) )'.

EU_CLONE_DT_SIZES:

Selecting from 'PUTTB_SHD' with condition '( ( CLONE == "D" or CLONE == "C" or CLONE == "G" or CLONE == "T" ) and ( SRCTYPE == "J" or SRCTYPE == "T" or SRCTYPE == "P" or SRCTYPE == "C" ) and SRCFORM == "T" )'.


The directories migrate_ut and migrate_dt are created by the phases EU_CLONE_MIG_UT_PRP and EU_CLONE_MIG_DT_PRP respectively, in /SUM/abap/.

 

Both the migrate_ut and migrate_dt directories contain .CMD, .STR and other files generated by R3ldctl. .TSK files are generated by R3load during export/import, with the migration result for each table (EXP = export files; IMP = import files).

 

EU_CLONE_MIG_*T_PRP: prepares the table COUNT(*), splits tables above a certain threshold, and writes the list of shadow tables and views to be imported, plus other detailed information, into the bucket file MIGRATE_UT.BUC.

 

EU_CLONE_MIG_*T_CREATE: the R3load (HANA) run that creates the table structures in HANA.

 

How to verify?

 

There are only MIGRATE_UT_CREATE_*_IMP.TSK files, and no *_EXP.TSK files, in SUM/abap/migrate_ut_create and SUM/abap/migrate_dt_create. You’ll see object type (T) and action (C) in the .TSK files.

Example: a random check on several .TSK files returns object type Table and action Create (in bold):

 

UT

T SXMS_SEQUENCE C ok

T SXMS_EO_RETRY_ER C ok

T SXMS_CUST_HDR C ok

T SXMSPLSRV C ok

T SXMSCONFDF C ok

T SXI_LINK C ok

 

DT

T ARCH_IDX_S C ok

T CRMC_ICSS_REG C ok

T CRMD_DHR_HSLSQUO C ok

T PAT13 C ok

T ARCH_OCLAS C ok

T CRMC_ICSS_IO_ATR C ok


To explain the migration run phases further:

 

EU_CLONE_MIG_UT_RUN (UPTIME): entries of the UT group tables are exported from the shadow repository and imported into HANA in parallel. R3load pairs perform the export and import: the first R3load (part of the shadow kernel) exports the data, and the second R3load (part of the target kernel) imports the data into the SAP HANA DB.

Both R3loads run in parallel on the same host. No export files (dump files) are created, because the data transfer between the R3load pair happens through the main memory of the host. This R3load option is called memory pipes (currently only for non-Windows hosts).
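The memory-pipe idea can be sketched in toy form (this is only an illustration of the concept, not the actual R3load implementation):

```python
import os
import threading

# Toy illustration of the memory-pipe idea: an "exporter" thread writes rows
# into an in-memory pipe while an "importer" reads them concurrently --
# no dump file is ever written to disk.
read_fd, write_fd = os.pipe()

def exporter(rows):
    with os.fdopen(write_fd, "w") as pipe:   # closing the pipe signals EOF
        for row in rows:
            pipe.write(row + "\n")

t = threading.Thread(target=exporter, args=(["row1", "row2", "row3"],))
t.start()
with os.fdopen(read_fd) as pipe:             # the "importer" side
    imported = [line.strip() for line in pipe]
t.join()
print(imported)   # → ['row1', 'row2', 'row3']
```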


To understand more, refer to 2 great blogs shared by Boris Rubarth DMO: technical background and DMO: comparing pipe and file mode for R3load

 

This can be seen in the MIGRATE_UT_*_EXP.CMD and MIGRATE_UT_*_IMP.CMD files, where a ‘PIPE’ is used as the data file:


Example:

tsk: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_EXP.TSK"

icf: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_EXP.STR"

dcf: "/usr/sap/SID/SUM/abap/migrate_ut/DDLORA_LRG.TPL"

dat: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042.PIPE"

 

tsk: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_IMP.TSK"

icf: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042_IMP.STR"

dcf: "/usr/sap/SID/SUM/abap/migrate_ut/DDLHDB_LRG.TPL"

dat: "/usr/sap/SID/SUM/abap/migrate_ut/MIGRATE_UT_00042.PIPE"


Also, you can see that the modification times of the export and import .TSK files are identical or very close to each other.

Mar 22 11:16 MIGRATE_UT_00010_IMP.TSK

Mar 22 11:16 MIGRATE_UT_00010_EXP.TSK

Mar 22 11:16 MIGRATE_UT_00001_IMP.TSK

Mar 22 11:16 MIGRATE_UT_00008_IMP.TSK

Mar 22 11:16 MIGRATE_UT_00008_EXP.TSK

Mar 22 11:16 MIGRATE_UT_00009_IMP.TSK

Mar 22 11:16 MIGRATE_UT_00009_EXP.TSK

Mar 22 11:17 MIGRATE_UT_00014_IMP.TSK

Mar 22 11:17 MIGRATE_UT_00014_EXP.TSK

 

By the way, how does SUM-DMO know which R3load/binaries to use, given that there is both a shadow kernel and a target HANA kernel?

 

DMO distinguishes them during the configuration phase: the source DB (shadow) kernel is extracted to SUM/abap/exe, whilst the target HANA kernel goes to SUM/abap/exe_2nd/.


Result at the end of the SUM configuration phase:

R3load_25-10012508.SAR PATCH    UNPACK_EXE        OK           SAP kernel patch: R3load ,Release: 741

R3load_25-10012508.SAR PATCH    UNPACK_EXE2ND                 OK           SAP kernel patch: R3load ,Release: 741

dw_25-10012457.sar PATCH    UNPACK_EXE OK           SAP kernel patch: disp+work ,Release: 741

dw_25-10012457.sar PATCH    UNPACK_EXE2ND OK           SAP kernel patch: disp+work ,Release: 741


The phases above run during UPTIME, and only EU_CLONE_MIG_UT_RUN is executed, not EU_CLONE_MIG_DT_RUN. Again, refer to step 2b in both picture A and picture B: application data only moves to the target database (HANA) once the procedure enters DOWNTIME.

 

Picture D: Application data migrated to target Database (HANA) via phase EU_CLONE_MIG_DT_RUN:

 

EU_CLONE_MIG_DT_RUN (DOWNTIME): At downtime, entries of the application data tables (DT) are exported from the shadow repository and imported into HANA in parallel, using the same R3load pairs as in phase EU_CLONE_MIG_UT_RUN.

 

Lastly, the consistency of the migrated content is checked by running COUNT(*) on each table in the source and in the target database. This behaviour can be maintained in /bin/EUCLONEDEFS_ADD.LST (see /bin/EUCLONEDEFS.LST for reference) with the options below:

 

ignlargercount  ->  applied when the table might change during cloning (e.g. REPOSRC)

igncount        ->  the table count is ignored

nocontent       ->  the table does not exist in HANA (DBATL, SDBAH - DB-specific tables)

noclone         ->  the table does not exist (/BIC* - BW temporary tables)
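As a rough illustration of how the count comparison could honour these options (table names and counts below are invented; this is a sketch of the idea, not SUM's actual implementation):

```python
def check_counts(source, target, options):
    # Compare COUNT(*) per table, honouring EUCLONEDEFS-style options.
    problems = []
    for table, src in source.items():
        opt = options.get(table)
        if opt in ("igncount", "nocontent", "noclone"):
            continue  # count ignored, or table has no counterpart in the target
        tgt = target.get(table, 0)
        if opt == "ignlargercount" and tgt >= src:
            continue  # table was allowed to grow during cloning (e.g. REPOSRC)
        if tgt != src:
            problems.append((table, src, tgt))
    return problems

# Invented counts for illustration
source = {"MARA": 1200, "REPOSRC": 50000, "SDBAH": 10}
target = {"MARA": 1200, "REPOSRC": 50037}   # REPOSRC grew during cloning
options = {"REPOSRC": "ignlargercount", "SDBAH": "nocontent"}

print(check_counts(source, target, options))  # an empty list means consistent
```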

 

I hope this blog helps others to understand DMO better. Please correct me on any incorrect statement; extra input and shared info are greatly welcome!

 

Cheers,

Nicholas Chang

Sum and count average in SAP HANA calculation view from single fact table


I have a count (any numeric measure) coming from my fact table (column-1). I want to calculate the total of all column-1 values, which is 4577733 (column-2), and then do the division column-1/column-2 to derive the average share.


1.png

The issue is that the moment we put "TYPE" into the output, the numbers are lined up by individual type, as you see in column-1. But the objective is to get the total sum, 4577733, and then divide column-1 by it. There could be many alternatives; below is one option.

Here all these numbers come from the same table, so I want the sum (4577733) at the aggregated level. To achieve that, I use the same underlying table twice in the calculation view: on one side it is aggregated at the "TYPE" level, and on the other side the data is not rolled up to the "TYPE" level. We need to roll up the data at two different levels, as below.

This can be achieved using the calculation view.

2.png

Please notice that here there is no aggregation level; the data is rolled up to the key figure only.

 

 

 

 

 

3.png

Next, aggregate by type/category, STVEHCAT, and the key figure.

 


4.png

So we use two different aggregation levels to roll up the numbers from the same base table. Please note that you link the two aggregation levels by the key figure only.
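In other words, the view computes one aggregation per TYPE and one grand total over the same fact table, links them on the key figure, and divides. A minimal Python sketch of that logic, with invented fact rows (the direction of the division follows the column-1/column-2 description above):

```python
from collections import defaultdict

# Invented fact rows from a single table: (TYPE, count measure)
facts = [("A", 100), ("A", 300), ("B", 250), ("C", 150)]

# Aggregation level 1: rolled up to TYPE
per_type = defaultdict(int)
for vtype, cnt in facts:
    per_type[vtype] += cnt

# Aggregation level 2: the same table rolled up to the key figure only
grand_total = sum(cnt for _, cnt in facts)

# "Join" the two levels on the key figure and divide
for vtype, cnt in sorted(per_type.items()):
    print(vtype, cnt, grand_total, round(cnt / grand_total, 4))
```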

 

5.png

6.png

7.png

8.png

Use the data Preview.

 

9.png

Thanks

Ramakrishnan Ramanathaiah

Does the thought of Disruptive Innovation Have you on Edge?


In his keynote at SAPPHIRE NOW, Hasso Plattner spoke about change in SAP systems and applications being not only a good thing, but essential. He used the term "Disruptive Innovation", an expression that often has management and staff alike shuddering at thoughts of what it means to their daily work existence.


As SAP solutions evolve and take advantage of new technologies, customers evolve as well. And, the change can be disruptive in a good way. What everyone wants to avoid is true disruption of ‘business as usual’.


How do organizations avoid the pain of change?  They don’t. They plan for it. By planning, they can avoid the worst of disruption and cultivate a formula for transition.


Plan ahead. Change doesn’t have to be an endurance test. Like anything else, it can be handled with a well thought out plan. We’ve found the fastest results occur when companies spend the necessary time planning ahead before they dive in. What does this mean?


There are ways to ease the change process. Plans need to be well thought out including getting input from all constituencies involved and everyone understanding the effects of the new technology. The plan also needs to be tested, often tweaked, and re-tested to get the architecture right. Organizations that recognize and quantify their objectives can maximize the value they derive from innovative technologies. Most importantly, they can reduce risk when new solutions are implemented.


There are SAP Partners with proven strategies for simplifying the transition process that can help you achieve your goals from an optimized SAP environment. The results are innovation without negative disruption, faster payback and lower total cost of ownership (TCO).


What kind of strategy would minimize your pain threshold when it comes to change?

Voice of Customer , Sentiment analysis & Feedback service


Some time ago, I was watching a game on TV when the broadcaster started comparing the two teams by showing what fans were saying about the teams and players on social media. The broadcasters excitedly compared the percentages and tweet counts for each team and flashed which marquee player was currently trending more.

 

It left me wondering, with deeper questions: is public sentiment really that important for determining the outcome of the game? Has it simply become important enough to analyse, even if it is not as vital as the players' and teams' skills, their strengths and weaknesses, or their game tactics and strategy? Or is it as important as any other game-analysis attribute?

 

If a presidential debate is happening, a new movie is released, or, for example, a new consumer product is launched, I can see the value of analysing what people are saying about it as an important feedback channel.

 

Voice of the Customer (see Wikipedia) has intrinsic value; it always has and always will.

 

Sentiment analysis is important and cannot be ignored, however fickle it may sound initially. Probe deeper and you will invariably find value, even in the Twitter churning by a TV broadcaster during a game: it matters to the sponsors and to marketing campaigns around the players, teams and sport, it matters to hard-core loyal fans, and it perhaps wins over the loyalty of some bystanders.

 

Businesses will always be interested in deeper insight into market trends and customer perception of their brand and products. They want to proactively respond to customer sentiment for improved brand loyalty and stronger customer relationships, and to use it in their marketing campaigns to guide business strategy.

Social media data is unstructured, voluminous and fast-changing. Is it big data? Perhaps. I would imagine that any solution doing analysis on this data has to process the text in real time.

Incidentally, HANA has text analysis capabilities in its native platform. The HANA developer guide has a text analysis section detailing how to use these options with SQL commands. You have the option to build applications that do text and sentiment analysis using the indexed tables.

Text from social media sources like Facebook and Twitter can be imported into HANA tables, including from a Hadoop system using smart data access. The content imported for analysis need not be a character string; it can be HTML or XML strings, or Word and PDF documents.

Although not truly real time, a text analysis index can easily be built on a HANA table containing social media content such as Twitter messages, using native HANA fulltext index creation with the LINGANALYSIS configuration to derive tokens from words: nouns, verbs, adjectives, prepositions, punctuation, etc. This way each message is split into different segments of data and stored in an index table. A query to find out which noun appears most often in this table can then easily identify, for example, the most tweeted-about or trending player.

Messages such as "New Yorkers like Riley" convey sentiment. Words like "great" and "wow" (and, for that matter, profanity) also carry sentiment. Words can be broken into different token types like person, city, organization, topic, and sentiment (graded from weak positive to strong positive and from weak negative to strong negative), taking emoticons into account as well. Native HANA allows building an index table on these text messages using the voice-of-customer configuration, so more intelligent analysis can be done, for example to indicate whether, or for which team, social media is cheering, by looking at the token values that contain sentiment.
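As a rough sketch of such an analysis: in HANA the tokens would come from the generated $TA index table, whose columns include TA_TOKEN and TA_TYPE, and whose sentiment type names include values like StrongPositiveSentiment. The token rows below are invented for illustration:

```python
from collections import Counter

# Invented rows in the shape of a $TA index table: (TA_TOKEN, TA_TYPE).
# The sentiment type names follow HANA's voice-of-customer output;
# the tokens themselves are made up.
ta_rows = [
    ("Riley",  "PERSON"),
    ("great",  "StrongPositiveSentiment"),
    ("like",   "WeakPositiveSentiment"),
    ("awful",  "StrongNegativeSentiment"),
    ("wow",    "StrongPositiveSentiment"),
]

# Tally only the sentiment-bearing tokens
sentiment = Counter(t for _, t in ta_rows if t.endswith("Sentiment"))
print(sentiment.most_common())
# positives outnumber negatives: the crowd is cheering
```

In practice, the same tally grouped by the team mentioned alongside the sentiment tokens would show which team the crowd is cheering for more.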

Another great native feature, available on the HANA Cloud Platform, is the feedback service, which allows you to collect end-user feedback. It provides predefined analytics on the collected feedback data: feedback rating distribution and detailed text analysis of user sentiment (positive, negative, or neutral), as illustrated below.

The example web application shown below provides an option to give feedback.

Feedback 1.JPG

User feedback can be collected in an HTML form. The form invokes the feedback service created on the HANA Cloud Platform.

Feedback 2.JPG

The HTML form data is passed as an AJAX POST request.
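The AJAX call simply sends the form fields as a POST body to the service. A hedged Python sketch of assembling such a request follows; the endpoint URL and field names are invented for illustration, since the real ones come from your feedback service configuration:

```python
import json
import urllib.request

def build_feedback_request(endpoint, rating, text):
    # Assemble the POST request that the form's AJAX call would send.
    payload = json.dumps({"rating": rating, "text": text}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical endpoint and field names, for illustration only
req = build_feedback_request(
    "https://feedback-myaccount.hana.ondemand.com/api/v2/apps/myapp/posts",
    rating=4,
    text="Great app, search could be faster",
)
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req)  # would actually send it (network required)
```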

All feedback is collected by the HANA Cloud application and can be analysed further for sentiment.

Feedback 3.JPG

The application allows probing the feedback by positive, neutral, and negative sentiment, as well as by request or problem type.

Feedback 4.JPG

SAP Certified Technology Specialist - SAP HANA Installation


Introduction

SAP HANA SPS 06 introduced the tailored data center integration (TDI) delivery model as an alternative to the initial appliance model. TDI enables SAP customers to perform HANA installations by certified engineers on certified hardware (TDI FAQ).

 

In this blog, I will explain how you can become such a certified engineer by passing the SAP Certified Technology Specialist - SAP HANA Installation exam.

 

For the SAP HANA Academy, I have performed countless installations of SAP HANA since SPS 02 (GA) and upgraded many revisions. However, I found that experience alone is not enough: you do need to do some studying. But with some effort, anyone can do it. The exam is quite reasonable; it is not a Herculean task.

 

About the Certification

 

At the time of writing there are two editions of the SAP Certified Technology Specialist - SAP HANA Installation certification.

 

For the latest information, see the blog by Tim Breitwieser on SCN: SAP HANA Education: Course & Certification Program 2014. The SPS 08 edition can be expected at the end of this year.

 

Prerequisite: SAP Certified Technology Associate - SAP HANA (E_HANATEC131 or E_HANATEC141)

 

Screen Shot 2014-06-26 at 23.54.39.png

 

Topic Areas

 

There are 40 questions divided over 7 topic areas, roughly 4-6 questions per topic.

 

  • System landscape planning: implementation approach and installation requirements; plans for landscape evolution; development, QA, and productive system setup; disaster recovery strategies.
  • Hardware and operating system verification: review network setup and storage design, prepare hardware and OS, identify relevant SAP Notes.
  • Installation and uninstall procedures: prepare the installer configuration file, implement prerequisites for the unified installer. Execute and troubleshoot the installation and uninstallation processes, including the removal of the SAP Host and SMD agents.
  • Post-installation: verify the installation, initial system backup procedures. Set up the connection with SAP Solution Manager and apply encryption.
  • Scale-out scenarios: scaling techniques and using lifecycle manager (LM) for scale-out.
  • HANA studio: install studio on a client computer.
  • Multiple databases (E_HANAINS131 only): use LM to manage single or distributed landscapes.

 

Resources

 

As the main resource, training HA200 - SAP HANA - Installation & Operations is listed. This 5-day training covers not only installation but also administration tasks like backup and recovery, operations, etc. As mentioned, the prerequisite for the Installation certification is the Technology Associate certification, which lists the same training as a resource, so it probably is a good investment of your time. However, it is not a requirement.

 

Additional resources mentioned as preparation for the exam:

 

The SAP Help Portal on http://help.sap.com/hana_platform only shows the latest documentation, SPS 08 at the time of writing. For SPS 06 and SPS 07, you need to go to the SAP Service Marketplace on http://service.sap.com/hana.

 

SAP note 1905389 - Additional Material for HA200 and HA200R. This note contains additional information and documents. Most of the SAP Notes mentioned are also listed in the Server Installation and Update Guide under Important SAP Notes.

 

SAP HANA Academy

 

To help you prepare for the exam, I have recorded some tutorial videos on SAP HANA installations. The playlist SAP HANA Installations SPS 07 discusses concepts and shows how to install the SAP HANA server, studio and client. Upgrading studio using an update site and upgrading the server using Lifecycle Management (LM) are also covered to help you prepare for the exam.

 

Sample questions

 

On the certification page, a link to a PDF with sample questions is included. The questions are the same for both editions. Let's warm up a bit and go through them; I have marked the answers in bold and included a reference to the source with some tips and to-dos.

 

1. Which of the following Linux distributions is supported by SAP HANA?

a. CentOS

b. Fedora

c. SLES

d. Ubuntu

 

Source: This information can be found in the PAM and installation guides. Note that as of SPS 08, Red Hat Enterprise Linux is also supported, but not the open-source Fedora equivalent. There is no support either for the popular Ubuntu or CentOS (popularised by Amazon Web Services cloud computing). The HANA client works fine on these distributions and you will find references on SCN, but they are not supported. Always check the PAM.

 

To do: You need to be familiar with the product availability matrix for SAP HANA. What operating system is supported for the server and what operating systems for the client? Any prerequisites, like Java runtime?

 

==

 

2. Who is responsible for the setup of the custom storage connector in a shared storage environment?

a. The storage vendor

b. The storage connector implementation partner

c. The server hardware vendor

d. The SAP storage team

 

Source: Mentioned in the FAQ for SAP HANA Tailored Data Center Integration.

 

To do: read the FAQ. Additional information about this topic can be found in note 1900823, which addresses the Storage Connector API. The note contains two additional documents: the Storage Whitepaper and the Fiber Channel Storage Connector Admin Guide. Get familiar with SAN, NAS, LUN, global.ini, split brain and STONITH ("shoot the other node in the head"). What purpose does the storage connector serve?

 

==

 

3. Which of the following file systems must exist before you start the installation of a stand-alone SAP HANA system? ( 2 correct answers)

a. Log volumes (example: /hana/log/<SID>)

b. SAP mount directory (example: /hana/shared)

c. Data volumes (example: /hana/data/<SID>)

d. Local SAP system instance (example: /usr/sap)

 

Source: Server Installation Guide. The other ones will be created if they don't exist. The question is a bit awkwardly phrased, as it is the directory that must exist, not the "file system".

 

To do: familiarise yourself with the recommended file system layout. What runs locally, what centrally? Host Agent, SMD, LM? Where are backups stored? Why?

 

==

 

4. You have to verify the hardware and operating systems for an SAP HANA system stand-alone installation. Which of the following SAP Notes should you consider?

a. Note 1577128 - Supported clients for SAP HANA 1.0

b. Note 1730999 - Configuration changes in HANA appliance

c. Note 1755396 - Released DT solutions for SAP HANA with disk replication

d. Note 1824819 - Optimal settings for SLES 11 SP2 and SLES 11 for SAP SP2

 

Source: Server Installation Guide, section 1.3 Important SAP Notes. Note a and b are listed in the guide.

 

Todo: You need to be familiar with SAP Notes, e.g. 1514967 - SAP HANA: Central Note and 1848976 - SAP HANA Platform SPS 06 Release Note. I recommend reading the notes mentioned in the section Important SAP Notes in the SAP HANA Server Installation Guide.

 

==

 

5. When you are installing a new SAP HANA system the installer aborts with an error message. Which of the following actions do you do first?  ( 2 correct answers)

a. Analyze the installation log file.

b. Restart the host and start the SAP HANA installation again.

c. Check the hardware against the Product Availability Matrix

d. Check the troubleshooting section in the SAP HANA Server Installation Guide.

 

Source: common sense and the troubleshooting section in the SAP HANA Server Installation Guide. You check the PAM before an installation, not afterwards when things go wrong. Starting all over may or may not work, but it is not the most professional approach.

 

Todo: read the troubleshooting section in the SAP HANA Server Installation Guide. What's the name of the log file(s)? What do you do if you need to restart the installation from scratch?

 

==

 

6. Why should you perform an initial data backup? ( 2 correct answers)

a. To enable log backups

b. To turn on overwrite mode for logs

c. To enable the backup editor

d. To turn off overwrite mode for logs

 

Source: Administration Guide. Tricky question. The installation guide says: "We strongly recommend that you execute an initial system backup for later recovery of the initial system state." The WHY (recovering the initial state) is, however, not listed as a correct answer. The Administration Guide states: "Until an initial data backup or storage snapshot has been completed, the log mode OVERWRITE is active. In log mode OVERWRITE, no log backups are made."

 

Todo: This topic belongs to the post-installation tasks. Make sure you are familiar with them. How do you enable automatic start (parameter, file)? How do you stop/start the HANA database? How can you see which processes are up? Which processes are there? How do you enable persistence encryption? What gets encrypted?

 

If you are not familiar with backups for SAP HANA, watch the SAP HANA Academy playlist on Backup and Recovery.

 

==

 

7. Which of the following actions does SAP recommend that you perform once the installation is finished? ( 2 correct answers)

a. Perform a reboot of the SAP HANA database.

b. Perform an initial data backup.

c. Activate the debug mode in the SAP HANA studio.

d. Check the Alerts tab in the SAP HANA studio.

 

Source: again a tricky question. Performing an initial backup is documented in the installation guide as a recommendation. There are quite a few additional recommendations in that guide (you can search for the word), but none of them is mentioned here. There is no need to reboot HANA after an install, and you would use the debug mode in studio when developing, so only answer d is left as an option. As an administrator you would check the Alerts tab in studio regularly, and right after installation is a good moment to start doing this, but it is not explicitly documented. Not the best question, in my view.

 

Todo: familiarise yourself with the SAP HANA studio Administration Console. Which tabs are there? What are they used for?

 

==

 

8. Using the SAP HANA unified installer, which of the following roles can you assign to a new node?  ( 2 correct answers)

a. Standby

b. Worker

c. Slave

d. Master

 

Source: documented in the SAP HANA Update and Configuration Guide, section Adding Hosts to and Removing Hosts from an Existing SAP HANA System and the Administration Guide, Scaling SAP HANA.

 

Todo: read this guide and get familiar with the HANA Lifecycle Manager. What are the three modes? Which port for the browser? How do you set up HANA studio for HLM? What can you use the rename feature for?

 

Also read the section Performing a Distributed System Installation in the Installation Guide. What are the advantages of a distributed system? Which roles are there? What are the prerequisites? What purpose does the NTP service serve? What is unique in a landscape: SID? Instance number? When are multiple databases on a single appliance supported (as of SPS 06)?

 

Introduction to High Availability for SAP HANA | SAP HANA.

 

==

 

9. Which of the following are prerequisites for adding a new host to an SAP HANA SPS 06 distributed system? (2 correct answers)

a. User <SID>adm has the same password on all nodes.

b. The SMD agent is installed on all nodes.

c. User group sapsys has the same group ID (GID) on all hosts.

d. 'hdbnsutil –reconfig –hostnameResolution=global' has been executed

 

Source: same as for question 8, documented in the SAP HANA Update and Configuration Guide, section Adding Hosts to and Removing Hosts from an Existing SAP HANA System. The Solution Manager Diagnostics (SMD) agent is an optional component in the landscape and is only relevant when using SAP Solution Manager. The <SID>adm user can have a different password on each node; inter-node communication is typically done with SSH and keys.

 

==

 

10. You are preparing an SAP HANA installation. What are the requirements for the host name?  (3 correct answers)

a. Must start with a letter or an underscore.

b. The case must match the registered name in the DNS or /etc/hosts file.

c. Must be less than 8 characters

d. Must start with a letter.

e. Must be less than 13 characters

 

Source: Documented in the Server Installation Guide, section Installation Parameters with reference to Note 611361.

 

More Questions?

 

Unfortunately, but very reasonably, those that have passed the exam are not allowed to share the questions with anyone else, so I can't share any particular question with you here. However, a close study of the mentioned resources should provide you with enough knowledge to successfully pass the exam.

 

For edition 2013 (SPS 06) - E_HANAINS131, it is best to study the SPS 06 version of the guides.

For edition 2014 (SPS 07) - E_HANAINS141, it is best to study the SPS 07 version of the guides.

 

SAP HANA Server Installation Guide

1.1 - Software Components - Make sure you understand what role the components play and what editions there are.

1.3 - Important SAP Notes - familiarise yourself with these notes.

1.4 - Hardware and Software Requirements - consult the PAM for SAP HANA on software requirements and validated hardware configurations. Familiarise yourself with the Hardware check tool and how to run it.

2.1 - Preparation documents the recommended file system layout with recommendations about initial backups; installation parameters like storageConfigPath with the location where a global.ini file is defined; required and optional parameters for distributed systems; required and optional parameters for the configuration file; password requirements; distributed system installation prerequisites; and the support status for MCOS and MCOD. Make sure you are familiar with all of these settings and concepts.

2.2 - Running the installer documents how to perform this task, with new requirements about the working directory and execution account. Familiarise yourself with these requirements and with the syntax of install and uninstall: what gets installed and what remains after an uninstall. Study troubleshooting.

2.3 - Post Installation documents what you do and do not need to do after installation: how to enable auto-start, and the persistence encryption procedure. Expect questions about these topics.

 

Essential knowledge and preparation tutorial video.

 

 

Installing SAP HANA using Lifecycle Manager (SPS 07)

 

 

SAP HANA Update and Configuration Guide

2 - Preparing SAP HANA Lifecycle Manager (HLM) documents the different modes, requirements and parameters like HTTP port for the URL.

3 - Managing the Lifecycle of SAP HANA documents how to rename a system, configure SLT, set up a connection to the SLD, add or remove an SMD agent, add or remove additional systems (notice again the same requirements as mentioned in the Installation Guide), and perform an automated update. Execute all of these tasks on a training system, and look into how Solution Manager is set up.

4 - Managing SAP HANA Components and Application Content documents how to add AFL, LCA and SDA.

5 - Troubleshooting


Installing SAP HANA: Lifecycle Manager - Update SAP HANA (SPS 07)




SPS 06 What's New : Lifecycle Management by Emil Simeonov



SAP HANA Administration Guide

10 Scaling SAP HANA documents scale-up and scale-out; study how to configure the network for multiple hosts and how to add or remove hosts (10-2 to 10-4)

11 Managing Encryption of Data Volumes in the SAP HANA Database: understand the concepts.

13 Backing Up and Recovering the SAP HANA Database: understand concepts.

14 High Availability for SAP HANA: understand concepts, levels of disaster recovery support.

19 SAP Solution Manager for SAP HANA references the document HANA Supportability and Monitoring; read section 1, Remote Access.

 

SAP HANA Database - Studio Installation and Update Guide

Expect questions about supported platforms, prerequisites, update site configuration.

 

 

 

 

Additional Material

 

How to install a license file

 

 

Configuring Server Autostart

 

 

 

Did you succeed?

 

Feel free to post a comment about how the exam went. If there is any information missing, please let me know.

 

Success!


SAP HANA Academy – keeping you current....


The upside of a new release every six months or so is the speed of innovation that can occur.  The downside for us busy people is simply keeping up.  Well, you will be happy to know that the team from the SAP HANA Academy has you covered. Since the release at the end of May, we have added over 60 videos highlighting the new and updated features found in SPS08.  Rumor has it there are even more topics on their way in the coming weeks.  If you haven't checked it out, you should!

 

Why not spend a nice summer evening with your favorite cool beverage, a laptop or smart phone, and brush up on some SAP HANA happenings? 

Check out these new playlists introducing new capabilities with the SAP HANA platform:

SAP HANA Answers

SAP HANA Installations - SPS 08

SAP River Rapid Development Environment

SAP HANA SPS 08 - What's New

These topics received updated videos and highlighted new features from the release:

SAP HANA Security

SAP River

Modeling and Design with SAP HANA Studio

SAP HANA studio

SAP HANA smart data access

SAP Enterprise Cloud

Predictive Analysis Library

SAP HANA Cloud Platform

 

And one for the sports fan in all of us:  a video series showing how we used play-by-play data from the recent IPL Twenty20 tournament to create insight into the matches using SAP HANA and SAP Lumira.

Cricket

 

Don't miss any new updates – subscribe to our YouTube Channel today!  And if you have some free time while your colleagues are on vacation, why not tackle a hands-on step-by-step project like SAP HANA Academy Live2 – with videos and code samples to guide your way?  It's more fun than summer school!

Voice of Customer, Sentiment analysis & Feedback service on Twitter Feeds - Part 2


Sentiment analysis on comments made by customers or users can provide insights into several aspects, not just what they like or dislike. The key is to approach it like a good questionnaire or survey would: aim to find not only the obvious inferences from individual ratings or responses, but also the less obvious ones, by analysing responses across multiple attributes. Gallup surveys work this way, quite accurately predicting results for a large constituency by sampling only a minuscule cross-section of people.


This continues my thoughts on feedback and sentiment analysis shared earlier in Voice of Customer, Sentiment analysis & Feedback service, where I used the text analysis features available in HANA to do sentiment analysis on customer feedback collected from external web sites.

That was done by providing users or consumers with an HTML-based form to enter their ratings and free-text reviews on a web site, and then using a jQuery/AJAX call to a service on the HANA Cloud Platform to collect those ratings and feedback.

 

But how about gathering feedback from social media and analysing what people or customers are saying there, for example on Twitter?

 

Twitter provides a REST API to search message feeds (tweets); however, it requires requesting applications to authenticate all requests with OAuth. This means an OAuth token, or access token, must be present in each request. As a consequence, the requestor of the Twitter search service has to create a developer account on http://dev.twitter.com; the consumer and access token keys are then assigned to the requestor's account.

With these tokens, REST API searches can be made across all messages, or on the messages of a specific person or account.

 

Though there are several ways to invoke the Twitter API, client-side jQuery/AJAX methods do not work. Workarounds may exist, but they are not recommended: a server-side call to the Twitter API is the recommended, secure method.

Searching for Twitter messages can be implemented in many ways; for example, a Java application using the Eclipse IDE and the open-source Java library available at http://twitter4j.org would be one option. However, I chose to use a PHP script to search Twitter messages on my local server. There are many open-source libraries and widgets which work with OAuth; I have used the twitteroauth PHP library. It can be downloaded from GitHub, and the source can easily be included in a PHP script.

 

An example search: say you want to find all tweets containing the word "architecture" and limit the search to return only 10 tweet messages.

 

 

 

A snippet of PHP code that performs this search is shown below.

 

<?php
// The twitteroauth library must be included first; adjust the path to
// wherever you extracted the library downloaded from GitHub.
require_once('twitteroauth/twitteroauth.php');

$consumer = "put your consumer key";
$consumersecret = "put your consumer secret id";
$accesstoken = "put your access token";
$accesstokensecret = "put your access token secret";

$twitter = new TwitterOAuth($consumer, $consumersecret, $accesstoken, $accesstokensecret);
$tweets = $twitter->get('https://api.twitter.com/1.1/search/tweets.json?q=architecture&result_type=mixed&count=10');
?>

However, if you want to make the search more flexible and search for any word in Twitter messages, below is the PHP snippet I used.

 

<html>
<head>
    <meta charset="UTF-8" />
    <title>Twitter Search using PHP</title>
</head>
<body>
    <form action="" method="post">
        <label> Search : <input type="text" name="keyword" /> </label>
    </form>
<?php
if (isset($_POST['keyword'])) {
    // urlencode() protects against spaces and special characters in the search term
    $tweets = $twitter->get('https://api.twitter.com/1.1/search/tweets.json?q=' . urlencode($_POST['keyword']) . '&lang=en&result_type=mixed&count=50');
    if (isset($tweets->statuses) && is_array($tweets->statuses)) {
        if (count($tweets->statuses)) {
            foreach ($tweets->statuses as $tweet) {
                echo $tweet->user->screen_name . '<br>';
                echo $tweet->created_at . '<br>';
                echo $tweet->text . '<br>';
                echo '*************************************************************************************' . '<br>';
            }
        }
    }
}
?>
</body>
</html>


We can now search and print Twitter feeds using a PHP script. However, an important question remains: how do we save these message feeds into HANA? Again, several approaches are possible. In the past I have used cURL libraries to invoke REST web services with the POST method; this time I decided to use ODBC. I am running HANA on a cloud platform, so I need to open a secure DB tunnel to access the database from my client machine. To open the tunnel I used the Neo console client, which is available in the HANA Cloud Platform tools download area. When the tunnel is open, it provides a temporary host, port, user, and password for accessing the HANA database on the cloud instance. These values need to be supplied in the PHP script that connects via ODBC; similar values are needed to connect to an on-premise HANA database in the same fashion. A PHP code snippet to connect to the HANA database is provided below.



<?php
$driver = 'HDBODBC32'; // I am using the 32-bit driver
$host = "localhost:port"; // host and port provided by the DB tunnel
// Default name of your HANA instance
$db_name = "HDB";
$username = "Your user name";
$password = "Your password";

// Connect now
$conn = odbc_connect("Driver=$driver;ServerNode=$host;Database=$db_name;", $username, $password, SQL_CUR_USE_ODBC);
if (!$conn)
{
    // connection failed
    echo "ODBC error code: " . odbc_error() . ". Message: " . odbc_errormsg();
}
else
{
    echo "connection success";
}
?>

 

 

Once the ODBC connection is established, all SQL operations such as SELECT and INSERT statements can be performed by building the SQL command in a string variable $sql and executing it with odbc_exec(), for example: $result = odbc_exec($conn, $sql);

When all operations on the HANA database are finished, close the database connection with odbc_close(), for example: odbc_close($conn);
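Putting these pieces together, a loop that writes the search results into HANA could look like the sketch below. Note this is only a sketch: the table and column names are my assumptions based on the "MyTweetsTable" layout described later in this post, $conn is the ODBC handle from odbc_connect() above, and $tweets is the search result from the earlier snippet.

```
<?php
// Sketch only: table/column names are assumed, not taken from a real schema
$sql = 'INSERT INTO "MyTweetsTable" ("USER", "CREATED_AT", "TEXT") VALUES (?, ?, ?)';
$stmt = odbc_prepare($conn, $sql);   // a prepared statement avoids quoting issues
foreach ($tweets->statuses as $tweet) {
    odbc_execute($stmt, array(
        $tweet->user->screen_name,
        $tweet->created_at,
        $tweet->text,
    ));
}
odbc_close($conn);                   // close the connection when finished
?>
```

Using odbc_prepare()/odbc_execute() instead of string concatenation also protects against malformed tweet text breaking the INSERT statement.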



The HANA table that stores the tweet data is named "MyTweetsTable", with columns for the tweet user, creation date, message (TEXT), and the hashtag values in the message.

 

Running the HANA SQL statement Create FullText Index "TWEETS_myindex" On "MyTweetsTable"("TEXT") TEXT ANALYSIS ON CONFIGURATION 'EXTRACTION_CORE'; creates another HANA table containing the text-analysis results for the tweet messages. Another configuration option for text analysis is LINGANALYSIS_FULL.

This text-analysis option parses each message into grammatical components such as nouns and adjectives. These words, once categorized by their grammar, are called tokens and are grouped into categories. For example, the word SAP is a noun in the Organization category, and CNN is categorized as Media.

Since the text-analysis table also records how many times each token occurs, it is now fairly easy, as I mentioned in my previous post, to see which words are trending most.

 

For example, I searched Twitter messages for the word "SAP" and found that "SAP Package Technologies" was talked about most in the tweets. This text-analysis index table can be queried in many different ways to gain more insight into the data.
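A query along the following lines tallies the token frequencies. This is a sketch: I am assuming the generated table follows HANA's usual $TA_<index name> naming convention for the index created above.

```
-- Count how often each token occurs across all analyzed tweets
SELECT "TA_TOKEN", "TA_TYPE", COUNT(*) AS "OCCURRENCES"
  FROM "$TA_TWEETS_myindex"
 GROUP BY "TA_TOKEN", "TA_TYPE"
 ORDER BY "OCCURRENCES" DESC;
```

Adding a WHERE clause on "TA_TYPE" narrows the result to one category, such as organizations only.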


twitter feed text analysis.gif


Similar to the text-analysis index table, another index can be built to perform sentiment analysis on these tweet messages. This time, use the configuration option EXTRACTION_CORE_VOICEOFCUSTOMER instead of the EXTRACTION_CORE used in the previous example.

 

Note that HANA will not allow creation of another text index on a column that already has one. Since the TEXT column in MyTweetsTable already had a text-analysis index built on it, I needed to create a copy of the original table storing the tweet messages. This is easily done by reusing the same SQL CREATE statement with a different table name, and the data can then be copied with another SQL statement: insert into "Name of the target table_copy" select * from "Name of the source table_original".
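Put together, the copy and the new index could be created as follows. This is a sketch; the copy-table name and the index name "TWEETS_voc" are my own choices, not from the original setup.

```
-- Duplicate the original table, copy the data, then index the copy
CREATE COLUMN TABLE "MyTweetsTable_Copy" LIKE "MyTweetsTable" WITH NO DATA;
INSERT INTO "MyTweetsTable_Copy" SELECT * FROM "MyTweetsTable";

-- Build the voice-of-customer (sentiment) index on the copy
CREATE FULLTEXT INDEX "TWEETS_voc" ON "MyTweetsTable_Copy"("TEXT")
  TEXT ANALYSIS ON CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER';
```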

 

In the new voice-of-customer index table, the tokens are analyzed much further: they are not only tagged as organization names, product names, or social-media names, but are also categorized into positive and negative sentiments of varying strength (weak or strong). The analysis can even detect messages asking for information and categorize them as Request types.

 

Questions such as "What was mentioned most in these tweets?" and "What sentiments did they express?" are easily answered from the voice-of-customer index table. See the snapshot images below, taken from queries on the table.
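The sentiment breakdown in the first snapshot can be reproduced with a query along these lines. This is a sketch: it assumes the voice-of-customer index was named "TWEETS_voc" (so HANA generates a $TA_TWEETS_voc table) and that the sentiment categories all contain "Sentiment" in their TA_TYPE value.

```
-- Tally the sentiment categories found in the tweets
SELECT "TA_TYPE", COUNT(*) AS "MENTIONS"
  FROM "$TA_TWEETS_voc"
 WHERE "TA_TYPE" LIKE '%Sentiment%'
 GROUP BY "TA_TYPE"
 ORDER BY "MENTIONS" DESC;
```

Replacing the WHERE filter with a specific category, for example 'StrongPositiveSentiment', lists the tokens behind that category instead.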

VOC - what was mentioned most.JPG

Notice most sentiments were weak positive comments.

VOC - What Sentiments used  in tweets.JPG

If we want to explore which words were used in tweets with strong positive sentiment, those can also be easily identified; the results are shared below. The same analysis can be done for negative-sentiment comments as well.

VOC - What made Strong Postive Sentiments used  in tweets.JPG

 

 

Also, since younger people tend to use emoticons more, the data can be queried for the emoticons used in these Twitter messages to gauge the general mix of happy, smiley, and sad faces, as the image below shows.

VOC - What emoticons were used in tweets.JPG

 

One of the categories into which tokens are grouped is PRODUCT.

The image below shows which kinds of products were being referred to in the tweets.

VOC - What PRODUCTS  were named in tweets.JPG

 

If you want to know which organizations were talked about most in these Twitter messages, a query on the indexed table provides the answer.

VOC - What organizations were used in tweets.JPG

 

Twitter messages carry a lot of other useful information, such as how many times a tweet has been retweeted or liked, user demographics, location, and so on. By capturing all these attributes from the Twitter API, a much more meaningful sentiment analysis can be done.

So the next time I watch TV and the broadcasters tell me which team people in different parts of the world are cheering for most, it will no longer be a mystery to me!

 

Analysis of Twitter (or any social media) data must be done carefully to extract insights that are useful to the business, and needless to say, it is highly dependent on the quality of the data. It may make more sense to monitor customer sentiment during certain time windows, such as a State of the Union address by the president, or during special business events, for example a new product launch or while a commercial is airing.

 

Technology is here to support the business, and I am looking forward to making it work in real use cases where positive sentiment increases (or negative sentiment declines) over time.

cron to schedule a backup Error


When I try to execute backup.sh from cron, I get the errors below:

 

./bacup.sh: line 1056:/usr/sap//HDB//trace/script_log_backup_fri.txt: No such file or directory

 

 

./bacup.sh: line 365:/usr/sap//HDB//exe/script_log_backup_fri.txt: No such file or directory

 

 

./bacup.sh: line 385:/usr/sap//HDB//exe/script_log_backup_fri.txt: No such file or directory

./bacup.sh: line 392:/usr/sap//HDB//exe/script_log_backup_fri.txt: No such file or directory

 

Please help me solve this.

 

Thanks,

Gopinath.

SAP HANA Event Belgium


PuurC.png

Technology is all around us, but not always as easy to grasp, let alone understand how it can change our lives.

HANA seems to be such a difficult-to-grasp technology for many companies. Having lived with the client-server architecture for decades and being used to traditional databases, many find the challenges of SAP HANA daunting.

Even worse, SAP initially marketed SAP HANA as just a really fast database, or at least that's how many people understood the message.

 

So we at our company decided the time was right to level the playing field and explain to organizations what the real added value of the HANA platform means for their company.

crowd.JPG

 

Technology is Child's Play

Any marketeer will tell you that location is one of the three main pillars to getting your message across. To show the invited crowd how technology is overtaking daily life, and how the next generation of employees is playing with science and technology in ways we never imagined, we organized the event at Technopolis.

Technopolis 002.jpg

We wanted to present the HANA platform to our audience, not from a technological perspective, but from a perspective that a kid would take.

Not about: How does it work?

But about: How can I play with it? What will it do for me?

 

Content

Keynote

Dr. Juergen Hagedorn from SAP Walldorf was kind enough to give the keynote for our event. He had just returned from a holiday, but made a smashing effort to be present at the HANA event and level the playing field. During his keynote, Dr. Hagedorn outlined the future vision for the SAP HANA platform and explained that HANA is more than just a database: it's an entire platform on which you can build your enterprise IT systems, both SAP and non-SAP.

 

User Experience - Powered by HANA

MRP.png

Next up, yours truly dived deeper into the user experience powered by HANA. Everyone already knows what Fiori is and what it looks like. Many people, however, wonder why so many Fiori applications run only on HANA. During the UX session, we demonstrated what the additional computing power of HANA can do for your user experience.

 

Remember that user experience is not just about a fancy screen. It's about bringing data and functionality to users in a way that they understand what they're doing and can work intuitively. In that sense, HANA is very important to UX because the extra computing power allows you to aggregate data on the fly and present it to the user in a summarized, preferably graphical, form.

 

We also demonstrated how the HANA system points the employee to problems that will arise in the future and actively helps the employee in understanding and analyzing the problem. Moreover, the HANA system can simulate possible solutions and propose the most applicable scenario's to the user. In turn, the employee can visualize each scenario, understand what the solution entails, and eventually choose the most appropriate solution.

 

More scenarios such as Advanced Search and step-by-step applications followed, which really impressed the audience. It helped them understand how HANA does not just speed up their current processes, but it actually makes the employees more efficient and helps them deliver better quality of work.

 

Data Analysis - Powered by HANA

Our own data scientist Kim Verbist demonstrated to the audience how SAP HANA, in combination with Lumira, makes every employee a data scientist. The ease of use that Lumira brings enables every employee to explore data within his or her limits of authorization and build reports on top of it.

 

No longer do you require lengthy phases of blueprinting, developing, change requests, and testing, but now you can simply organize workshops with the employees and experts side by side. They can work together interactively and at the end of the workshop, have a complete report and storyline ready for productive use.

 

HANA doesn't just speed up your report execution, it speeds up the process of building new reports!

data.png

 

Business processes - Enabled by HANA

Our distinguished business-process veteran Luc De Winter explained to the audience how the massive computing power of HANA enables new business processes that were not possible before due to technical limitations. Those limitations are gone now, and the sky is the limit.

 

Of course, you can just use HANA as a fast database underneath your ERP system, but why linger in the consume phase when there is so much more to gain by improving your existing processes?

He demonstrated his point by explaining the allocation process in the Retail industry, which typically generates massive amounts of data and takes a nightly batch job to run. On HANA, this process can run online.

Pink_Shoes_with_handbag.jpg

If we take it a step further and adapt our processes to the new reality, we can also enable new business processes, which we did on our own RetailOnHana system with a process called re-allocation.

This is something which does not exist in SAP-Retail today. The allocation process is always about pushing products from distribution centers to stores, which is difficult enough on a classic environment. With re-allocation, you can analyse which stores are selling a lot of certain products and are running out of stock, meanwhile also identifying other stores that have too much stock and are running behind on their sales. Rather than discounting leftover stock, we can analyse which nearby stores can re-allocate some of their stocks to stores that are showing impressive sales figures. This maximizes your margin and efficiency and was not possible before.

 

In a last phase, you can even start inventing possibilities. The example showed how we can analyse historical sales data to cross-reference popular products and determine which products are often bought together. The pink handbag and the pink shoes appeared to be quite a popular combination, so using allocation and re-allocation, we can push leftover pink handbags from shop A to shop B, distribute the pink shoes from the distribution center to shop B, and start a new fashion trend!

 

The road to HANA

By this time, our audience had fully come to grips with HANA as an enabler for their data, processes, and people, and they were curious to know how they could move to a HANA-based architecture with as little risk and friction as possible.

 

My colleague and fellow Mentor Tom Cenens described how an organization can start experimenting with the HANA platform, move its data, start exploring, and eventually move its entire ERP system onto HANA. He described the different low-cost options available for taking your first steps on the journey to HANA, and how to eventually migrate entirely.

Path forward.png

A nice additional feature is HANA Answers, where your employees can collaborate around questions coming from the organization itself. These can be business questions, but just as well technical ones. This type of built-in collaboration platform helps ease the transition by combining the available talent and intelligence to overcome hurdles.

 

Feedback

Afterwards, we lingered a while longer to engage with the participants and give them more one-on-one advice, and we received a lot of positive feedback. Many participants had feared the event would mainly be about marketing HANA, but they were pleasantly surprised by a good mixture of technical aspects, vision, and business cases.

 

Some participants even came to us and said “Today, you gave us the ammunition we need, to justify an investment on HANA, to our business”.

In the past, the business would still regard HANA as a technical investment, but with the demos we gave, we managed to convince the audience that the HANA platform really is a business solution.

 

reception.JPG

 

Final words

The Belgian HANA event was a tremendous success, made possible only with the help of many people. So I want to give a big shout-out to everyone who participated, behind the scenes and on stage: all the people who invested evenings and weekends in setting up our own HANA system, building the demos, providing data, and organizing the location.

 

A big Thank You!

First SAP HANA Innovation Award – an exciting inauguration!


Innovation is what drives progress and changes society; without it, progress would come to a screeching halt. Whether you are a leader or a follower, innovation and disruption are going to happen: you can lead with innovation, or be left to respond to disruption. These are the critical success factors in business today as the pace of change accelerates. More importantly, innovation impacts how we perform our work and live our daily lives.

 

However, innovators are most often unsung heroes: people who dare to dream, who possess the creativity to envision new possibilities, and who have the courage to seize the opportunities presented by new technologies. The SAP HANA Innovation Awards were designed to showcase and honor customers innovating with SAP HANA.

 

The winners were announced on June 3 at Sapphire. Watch the video to experience the excitement as the winners are announced and the first-place winners receive their "big cheques" for charity. Congratulations to the winners on your achievement!

 


Trailblazer Award 2.jpgBig Data Award 2.jpgSocial Hero Award.jpg

 

View the winners on the award website, read the blog by Steve Lucas, or listen to the session from Sapphire in which Dan Lahl from SAP presents the HANA Innovation Award together with Steve Pratt from CenterPoint Energy, the first-place winner in the Big Data Innovator category.

 

Everyone loves a good contest

 

  • Customers using SAP HANA in production in 15 countries were invited to enter their innovation story in three categories: Trailblazer Innovator, Social Hero Innovator, and Big Data Innovator. Thanks to every one of you who participated and shared your story; every one of you is a winner!
  • Check out the 27 amazing stories shared by leading brands from 7 countries across the globe. They provide a simple and powerful way for you to relate to these companies' challenges and to how they are empowering specific individuals by using SAP HANA. More importantly, they can inspire you to explore the possibilities of using SAP HANA to drive innovation yourself.
  • 22 finalists were selected by public voting, in a format similar to American Idol or Dancing with the Stars. It was exciting to see how engaged people got and how they mobilized colleagues, friends, and even family to vote. The social buzz for the contest hashtag #HANAStory generated over 6M impressions on Twitter, 6.7K votes, and 76.1K website visits, with a voting frenzy in the last 2 days of voting.
  • stats.jpg

 

social media 2.png

  • The SCN team got engaged and developed missions for the SAP Community: almost 1,800 SAP Community members participated in the Ready for the Innovation Award mission.

missions.jpg

 

 

  • SAP User Groups were invited to nominate judges, and user groups from Germany, the Netherlands, Spain, the UK, and the USA submitted nominations. We also had an incentive for user groups to help spread the word and inform their members about the opportunity to participate in the award. I am pleased to announce that ASUG has won the user-group incentive for the most entries. Congratulations, ASUG!
  • A panel of 11 judges, comprising SAP User Group members and SAP employees, reviewed and scored each of the 22 finalists using a score card to select the final winners. Thank you for all your hard work.


SAP HANA Innovation Awards - Be a Winner Next Year!

SAP and the community want to learn from your amazing story.

tweet.png

How are you using SAP HANA to transform and re-imagine your business, or to change the world? Tell us how you are using SAP HANA to:

  • Improve business processes and performance
  • Overcome data management challenges
  • Drive new business models
  • Empower your workforce, customers and society

Stay tuned for more details about the contest.

 

Last but not least, please fill in the short survey about your experience and help us improve the aspects that did not work so well: Link to survey

 

 

Thank you to everyone who supported this incredibly exciting project.
