
SAP HCI Blog Series: Integration Platform as a Service (iPaaS) Part 3


Part 3: Understanding the Power of SAP HCI: SOAP2SOAP Scenario Configuration

Configure the Systems, Security Artifacts and Communication Channel

  Introduction: In Parts 1 and 2 of this blog series, we discussed setting up the modeling environment with the necessary tools, as well as the security artifacts required for configuring a working scenario. In this blog, a sample working scenario will be configured and deployed, and the monitoring features will be examined. So, let’s get started. Below is a diagram of the working scenario before configuration:

workspace 2.png

A top-down approach will be adopted in configuring this scenario; the following steps need to be performed:

  1. Configure the sender system, the receiver system (naming the sender and receiver systems is optional) and the sender authorization (the public key pair)
  2. Import the WSDL file from the SAP PI repository or locally
  3. Create a parameters file for referencing artifacts
  4. Create the mapping between the sender and receiver system, or import an existing mapping object from the SAP PI repository
  5. Configure the sender and receiver communication channels


Configuring the Scenario

1) Configure Sender and Receiver Systems: Click on the sender system as seen in the diagram, check the property area and enter an appropriate name for your sender system, without whitespace in the name. Under sender authorization, click on the browse option to import your public SSL key from where it is stored, as shown in the diagram below; this is usually the case for certificate-based authentication.

workspace 3.png

Follow the above steps to configure the receiver system name as well, but there is no need to browse for a public key on the receiver side.

2) Import WSDL file from SAP PI or the local file system: For this blog, an existing service interface is imported from SAP PI. To do this, right-click on the name of the integration project in the explorer, choose the “Import PI Content” option and select the type of artifact to be imported; in this case, a service interface. Tip: your WSDL file should have at least one operation, a service name and a port name. When you export a WSDL from the Integration Builder Directory it will contain the necessary elements. Proceed to import the WSDL/service interface for both the sender and the receiver system; this will be used during mapping. Right-click the integration project → Import PI Content. The WSDL file could also be imported from a local file system, so it is not mandatory to have SAP PI in order to use SAP HCI.

 

3) Create a parameters file for referencing artifacts: The parameters file created here will contain the reference parameters used in this scenario; these include the location of the WSDL files and the endpoints used when configuring the communication channels. To create the parameters file, right-click on the integration project and choose New → Other → General → File. Name the file “parameters.prop”, follow the wizard, make sure it is saved in the “src.main.resources” directory and click the Finish button.


file 1.png

The parameters.prop file is created, and the entries for the location of the WSDL files and the endpoints can be entered as seen in the figure below:

parameter 1.png

To reference these parameters during configuration of the channels, the parameter name to the left of the “=” sign is enclosed in double curly braces, e.g. {{SOAPSender_WSDL_URL}}.
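
For illustration, the contents of such a file might look like the following; the parameter names follow the reference style above, but the paths and URLs are placeholder values, not those from the original screenshots:

# parameters.prop - illustrative placeholder values only
SOAPSender_WSDL_URL=wsdl/SenderServiceInterface.wsdl
SOAPReceiver_WSDL_URL=wsdl/ReceiverServiceInterface.wsdl
SOAPSender_Endpoint=/demo/blog/meterbilling
SOAPReceiver_Endpoint=https://receiver.example.com/services/meterbilling

A channel field configured as {{SOAPSender_WSDL_URL}} would then resolve to the value assigned above at deployment time.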

4) Mapping between sender and receiver system: To implement the mapping, two options are possible: an existing mapping that meets the requirement for passing data from the source system to the target system can be imported from SAP PI, using the same steps as for importing service interfaces, or the mapping can be defined locally within the integration designer, which is the approach used in this scenario. To create a new mapping, right-click on the integration project, choose New → Other, then under SAP HANA Cloud Integration choose Message Mapping and follow the wizard.

mapping1.png

Choose the source and target elements (these are the WSDL files imported from the SAP PI ESR earlier; they could also be WSDL files imported from a local file system) and switch to the Definition tab to do your graphical mapping, with the same functionality as SAP PI graphical mapping. However, there is currently no option to create user-defined functions; only graphical mapping is supported. In this demo, the concatenation function has been used, and the drag-and-drop functionality of the graphical mapping is displayed in the figure below:

mapping2.png

Go to the modeling area, right-click on the mapping artifact, choose the option “Assign Mapping” and select the mapping that was created earlier.

5) Sender and Receiver communication channels: The communication channels will be configured in this step. Right-click on the channel artifact on the sender side and choose the SOAP adapter as the adapter type.

channel1.png

Choose the adapter-specific icon to configure the endpoint and the link to the WSDL URL; here the parameters defined in the “parameters.prop” file will be used.

channel2.png

Repeat the steps above to configure the receiver communication channel as well. Save all the changes to the integration project and you should see the screen below:

workspace4.png

Deploying the Integration Project

Once all the configuration is done and all the changes are saved, it is time to deploy. If a red marker appears on the integration project after saving, the configuration is not complete; check the problems console to fix this. To deploy the project to the cloud, right-click on the project and choose the option “Deploy Integration Content”. A pop-up is shown asking for the tenant ID; once this is provided, the deployment will be completed. If there is a problem or an error, a pop-up will indicate this.

deploy.png

Click OK and the deployment will be complete.

Testing

This scenario will be tested by triggering a message from SOAP UI (SOAP UI 4.0.1; other versions may cause problems). To do this, the WSDL file is loaded into SOAP UI, and the SSL settings are configured by uploading into SOAP UI the private key corresponding to the public key that was loaded into the sender system. This is done using these steps:

  • File → Preferences → SSL Settings, then upload the private key from the keystore and enter the password, if any.

Of great importance is the URL to be called from SOAP UI. After the WSDL file (the WSDL file used on the sender side) is loaded into SOAP UI, click on the edit endpoint option and construct your endpoint in this format:

https://<serverurl>/cxf/<path-defined-in-the-sender-communication-channel-endpoint>

  e.g. https://<servername>/cxf/demo/blog/meterbilling

Once all this is done, then messages can be sent from SOAP UI to SAP HCI.

Monitoring

After messages have been sent, monitoring can be done in SAP HCI. Switch to the Integration Operations perspective to monitor the messages being received by HCI. The diagram below shows sample messages received by SAP HCI; further details and various search functionalities are available in the monitoring tool.

eclipsemonitoring.png

The monitoring screen shows the different messages that have been received; some failed and some were successfully processed. The integration designer/developer can also monitor processed messages in the cloud using the SAP HCI WebUI!


webui5.png

Conclusion:

  In understanding the power of SAP HCI, this set of three blogs provides a complete walk-through scenario for using SAP HCI as an iPaaS (integration platform as a service). SAP HANA Cloud Integration makes it easy! You are now able to set up the environment and configure an iFlow. We have seen that SAP HCI is a lightweight, cloud-based solution which provides the possibility to use prepackaged integration content out of the box and to re-use existing SAP PI content. If you have questions, do not hesitate to contact me. I’m looking forward to updating you in my next blogs, where I will describe the new connectivity options and additional features of SAP HCI.


Written by: Abidemi Olatunbosun, Rojo Consultancy BV, The Netherlands

Contributor: Geert van den Reek, Rojo Consultancy BV, The Netherlands


Where are all extensions in the HANA ecosystem: An initial search


I’ve been blogging about extensions on the HANA Cloud Platform and I wanted to try a similar approach to the broader HANA ecosystem.


Note: "Extensions" have a particular implementation / flavor in the context of the HANA Cloud Platform.  In this blog, I'm referring to "extensions" in a more generic manner.

 

Let’s take a quick look at the high-level HANA architecture which will act as the foundation for our discussion.

image001.jpg

Extensions in the HANA Services layer

 

The first potential extension point is in the HANA Services layer.  There was a recent announcement regarding the use of Rogue Wave IMSL Numerical Libraries which will be embedded in the HANA analytics engine.

image002.jpg

If I remember correctly, this extension point was described by Vishal Sikka at the Sapphire last year but the Rogue Wave announcement is the first example that I’ve seen. I have no idea if a certification for such HANA Services extensions exists. I also don’t know whether such extensions are only possible if included in the SAP-delivered product or whether partners could offer such extensions to individual customers as a service offering.

 

Extensions in the HANA XS layer: Lumira Server as an example

 

I started examining this topic based on thoughts related to the architecture of the Lumira Server.

image003.jpg

Note: The Lumira Server is currently not yet in GA and the new solution is currently undergoing validation with the help of select customers.

 

Based on our diagram depicted above, the pattern for the Lumira Server looks like this:

image004.jpg

The components used in Lumira Server are HANA-XS-based and non-trivial

 

Component          Description
sap.bi.common      Security roles and other common files.
sap.bi.da          Data acquisition services. Supports the creation of datasets from CSV and XLSX files.
sap.bi.discover    Data discovery services. Supports searching within a dataset, the Dataset Overview page, visual discoveries, and the "Did You Know" feature.
sap.bi.launchpad   Launchpad features. Supports the item catalog and sharing functionality.
sap.bi.thumbnails  Service that generates the thumbnail images that are saved with visualizations and that appear on the dataset overview page.
sap.bi.va.core     Client-side resources required by the Visual Explorer application.
sap.bi.va.tp       Resources required by third-party libraries used and distributed with SAP Lumira Server.
sap.bi.va.vxtab    The Visual Explorer application and query services.

 

Note: The document “SAP Lumira Server User Guide“, on which this table is based, has since been taken off-line. I found the document at this URL on December 6, 2013. Usually I try not to reference documents that are no longer available, but I’m making an exception here.

 

The current architectural pattern used by the Lumira Server is the usual one, but what if a partner wanted to use some of the components to create their own application?

image005.jpg

This is a possible extension point that would provide value to the entire SAP ecosystem yet it is largely unknown and unexplored.


Conclusion

 

The Lumira Server framework scenario mentioned above is a hypothetical example. There are, however, concrete examples of individuals providing HANA-XS-based frameworks / functionality for re-use by other developers. For example, I found a GitHub project that provides an open-source SAP HANA XS JavaScript utility to reverse-geocode data in HANA tables (it appears that there is similar functionality in the latest release of the HANA Services layer).

 

What is interesting is that there are very few examples of such HANA-related extensions / frameworks on github (I only found two). There might be other examples on SCN forums or blogs but their placement - buried in some forum post - doesn’t make it easy for developers to find them.

 

There are also no commercial offerings for such frameworks. If you look at the HANA marketplace, you will find a variety of HANA-based applications but no offerings directed towards developers.  Some might say that this marketplace is primarily for end-users rather than developers yet I know of no other HANA marketplace.   If a developer / partner wanted to sell their HANA SQL Script code, where else would such products be offered for sale?

 

Such commercial offerings of HANA-related frameworks, however, are critical for the broader acceptance of HANA as a development environment. Although open-source HANA-related frameworks would be useful, enterprise developers often require a vendor-supported framework that provides 24/7 support and legal accountability.

 

Note: The developer community associated with the data science programming language “R” is quite mature / vibrant and there is also an integration between “R” and HANA. Yet, “R” code is primarily restricted to big data scenarios in HANA and is primarily focused on data scientists rather than developers - especially those who are developing domain-specific / business applications. For such individuals, the business algorithms / logic in the Business Suite that are being moved from ABAP to HANA might be useful as a framework.

 

I get the feeling that there are many applications being developed on top of HANA, and that a variety of customer-specific projects using HANA (the use case for RDL) exist, but the lack of extensions for HANA demonstrates that the broader HANA-related developer ecosystem is still relatively immature.

SAP HANA Analytical view


SAP HANA Analytical View

Now we will create an Analytic View which combines purchase order table data with the product attribute view we created in the previous step.

In your models package create a new Analytic View.

Name your new view AN_PURCHASE_ORDERS and click on finish.

Drag and drop the AT_PRODUCTS attribute view from the previous part of the exercise into the Logical Join of your new view. Add the Purchase Order and Purchase Order Item tables to the Data Foundation of your view.

Create a 1:N referential join between Purchase Order and Purchase Order Item on the Purchase Order Id column.

Now add the Created At column from the Purchase Order table, and the Purchase Order Id, Purchase Order Item, Product Id, Currency and Gross Amount columns from the Purchase Order Item table, to the output by clicking on the grey dot.

In the Semantics scenario, set the Default Schema to SAP_HANA_EPM_DEMO

Select the Logical Join box in the Scenario navigator at left.

You can then drag and drop the ProductId_1 from the data foundation to the Product Id column of the AT_PRODUCTS view creating a 1:1 referential join.

Return to the Semantics Scenario, set Gross Amount as a measure and the other fields as attributes.

Save and activate your Analytic View and then do a data preview. (You can either right-click on the view name and select Data Preview, or simply click on the Data Preview icon.)

You should see output similar to this.

You can also do the analysis of the Gross Amount by Country.
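
Since the screenshots are not reproduced here, a quick way to verify the result is to query the generated column view directly. A minimal sketch, assuming the view was activated in the "models" package and that the output columns carry the names shown above (both assumptions):

-- Smoke test against the activated view (package and column names assumed)
SELECT "Country", SUM("GrossAmount") AS "TotalGrossAmount"
FROM "_SYS_BIC"."models/AN_PURCHASE_ORDERS"
GROUP BY "Country";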

I hope this helps beginners.

For reference, see how to create an SAP HANA Attribute View:

SAP HANA Attribute View

Thanks,

Phani.

SAP HANA Calculation View


SAP HANA Calculation View

Right click on the models package and select new calculation view.

Name your calculation view CV_COUNTRY_TARGET and uncheck “Enable Multi Dimensional Reporting”.

Now drag and drop the Analytic View (AN_PURCHASE_ORDERS) that you created earlier, and also the COUNTRY_TARGET table.

Now add a projection (from the box you see to the left) for the AN_PURCHASE_ORDERS view.

Now click on the arrow you see on the right of your view, drag and drop the arrow on the projection.

Add Product Id, Gross Amount, Category, Product Name, Country columns to output.

Add a join operation and connect the COUNTRY_TARGET table and Projection_1. Make a 1:N inner join from Country in Projection_1 to Country in the COUNTRY_TARGET table.

Next, right-click on Calculated Columns on the right side and select New.

Name it “Performance”. Give it the data type VARCHAR and length 25.

In the expression editor type:

if("Gross Amount" < "Target", 'Target Not Met', 'Target Met')

Finally, validate your expression and click Add.
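
For reference, the calculated column behaves like a SQL CASE expression evaluated on the joined data. A sketch of the equivalent logic, with view, table and column names as illustrative assumptions:

-- Equivalent of the "Performance" calculated column (names are assumptions)
SELECT p."Country", p."GrossAmount", t."Target",
       CASE WHEN p."GrossAmount" < t."Target"
            THEN 'Target Not Met'
            ELSE 'Target Met' END AS "Performance"
FROM "_SYS_BIC"."models/AN_PURCHASE_ORDERS" AS p
INNER JOIN "SAP_HANA_EPM_DEMO"."COUNTRY_TARGET" AS t
        ON p."Country" = t."Country";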

Now connect the join to the Projection. Select Projection and add all columns to output.

Next, select the Semantics scenario and choose SAP_HANA_EPM_DEMO as the schema.

Choose the Columns as attributes.

 

Save and activate your view.

Do a data preview and choose raw data to see the output values.

I hope this helps beginners.

 

Please refer to the documents below.

SAP HANA Attribute View

SAP HANA Analytical view

Thanks,

Phani.

“Into the Real-Time Age Together with SUSE”


section5-bg.jpg


SAP Vice President Chris Hallenbeck at SUSECon 2013:

“Into the Real-Time Age Together with SUSE”

 

“HANA is just the start. We embarked on this journey with SUSE and we’ll continue on it with SUSE.” Chris Hallenbeck, Global Vice President of HANA Solution Management at SAP, used these words in his keynote speech at SUSECon 2013 in Orlando to sum up the latest highlight of this long-standing collaboration. Together with around 550 participants at the annual partner event of the SUSE Enterprise community, I discovered why SUSE is such an important partner for SAP as it enters the real-time age.

 

Completely New Data Platform

 

SAP has long since relied on the SUSE Linux operating system to enable its customers to achieve higher levels of availability and improved performance of their business-critical SAP applications – and at a low cost. SAP also works very closely together with Intel, SUSE, and global hardware vendors to develop its in-memory appliance, HANA, which offers much more than just innovative database technology. “HANA is a completely new data platform. It will totally change the way in which we build applications and perform analyses,” predicted Chris at SUSECon.

 

chris-hallenbeck.png

HANA can be used as an application server to significantly simplify and accelerate the development of business applications. It also enables vast amounts of data to be analyzed in real time, complex economic interrelationships to be predicted, and this knowledge to be used in order to make better decisions. The key term here is predictive analytics. By using HANA, companies are in a position to meet the principal challenges of the future; in other words, to simplify their system landscapes and increase their agility.

 

Other Top Trends Also Integrated


Since its official announcement by SAP in May 2010, SUSE Linux has been the recommended and supported operating system for optimal use of HANA. This means that SAP’s in-memory applications run on the open source platform provided by SUSE. In addition to its proven benefits, this platform provides the basis for an IT infrastructure that goes far beyond in-memory by also integrating top trends such as cloud computing and mobility and by allowing the seamless connection of products from other manufacturers.


“We will vastly expand our use of open source to enable us to meet the growing requirements for a flexible IT infrastructure in the long term,” assured SAP Vice President, Chris Hallenbeck, to the SUSECon participants. “Since we have been successfully driving developments and innovations forward with SUSE for many years, they are the ideal partner to go down this path with us.”

Access Chris Hallenbeck keynote at SUSECon 2013 (YouTube)

Drive Real-Time Business with Real Choice


Join us on Tuesday, May 6, 2014, at 12 p.m. EST for an informative Webinar, where you’ll hear directly from SAP experts on how customers are leveraging SAP Business Suite powered by SAP HANA to drive higher business value, lower IT costs, and allow more enterprise-wide engagement.

 

Date: Tuesday, May 6, 2014

Time: 12:00 pm EST

Speaker: Amir El Meleegy, Director of Product Marketing - Database & Technology, SAP

 

The fast-growing deployments of technologies such as in-memory and cloud have dramatically accelerated the massive movement toward real-time business. This is the main reason why SAP launched SAP Business Suite powered by SAP HANA – a major innovation changing the status quo in IT. You now have the opportunity to leverage an integrated suite of real-time applications designed to act as your unique business innovation foundation, delivering much simpler IT with a real choice of deployment and a truly personalized business user experience – all on the re-imagined platform SAP HANA.

 

More than 800 companies have already acquired the SAP Business Suite powered by SAP HANA – only one year after its launch – to drive real-time business value and lower IT costs, across all lines of business and in the context of different industries.

 

At this session, we will also:

  • Examine the comprehensive suite of real-time applications on a single in-memory platform for unlimited business innovation
  • Learn how you can simplify IT while getting more choice of deployment (cloud, on premise, or hybrid)
  • Discover the new SAP Fiori user experience providing business users with personalized yet simple access to real-time information for instant insight to action

 

Register Today!

SAP HANA Preconference Sessions at ASUG Annual Conference


You are one of the following: an application developer, business leader, architect or system administrator.

 

If I am correct, you should know about the following SAP HANA full day preconference seminars offered at this year’s annual conference in Orlando on Monday, June 2. Yes, there will be other SAP HANA sessions throughout the week, but there won’t be any like these!

 

1. Go Deep

Need to develop an app on SAP HANA? Come learn from 2 of SAP’s best HANA product managers! Thomas Jung and Rich Heilman will teach you all you need to leverage the power of SAP HANA from ABAP applications.

 

Application Development Based on ABAP and SAP HANA [HANDS-ON]

This hands-on session will give an overview on how to leverage SAP HANA from ABAP applications that integrate with the SAP Business Suite. Presenters will show concrete examples and best practices for customers and partners based on SAP NetWeaver AS ABAP 7.0x and 7.4. This includes the following aspects: The impact of SAP HANA on existing customer-specific developments; advanced view building capabilities and easy access to database procedures in ABAP; usage of advanced SAP HANA capabilities such as text search or predictive analysis from ABAP; integration into user interface technologies (Web Dynpro/FPM, SAPUI5); and best practices for an end-to-end application design on SAP HANA.


OR


2. Get it All!

Don't miss a thing! SAP HANA is feature rich and moving fast! It is hard for anyone to stay on top of all the innovative components that come with the platform. Come and learn what you are missing, directly from the responsible SAP HANA product managers. This session will leave you with a complete picture of the value and features offered by all of the SAP HANA components.


Learn about architecture, deployment landscapes, administration, application development and best practices. See what piques your interest so that you attend the right SAP HANA sessions and exhibits throughout the conference.


Comprehensive End-to-End Introduction to SAP HANA

This interactive full-day, preconference seminar will provide a comprehensive introduction to all the capabilities of the SAP HANA platform. This session will provide overviews of all the capabilities of SAP HANA supported by demos, customer use cases, adoption patterns, and tips to integrate them into your application and technology deployments. Areas to be covered include: Modeling; administration; application development; developer tools; text analysis; search; predictive; security; data virtualization and federation; geo-spatial; data acquisition and replication; landscape architecture; HA/DR; SAP BW on SAP HANA / Business Suite on SAP HANA (including migration); cloud deployments; and Hadoop integration, to name a few. Each section will be led by the PM owner for this area and will focus on what each capability brings to the project manager, architect, administrator, security admin, BI Analyst, developer, and project leads.

 

Sign Up or Find out More

Please check out the ASUG Preconference Seminar page to learn more.

SAP HANA on AWS and LUMIRA: My First Experience


It's 10 PM and I'm at the office working on a proof of concept for a new project. In this project we are using SAP HANA to store big data and Lumira to report on and visualize this data in near real-time. Since this is a proof of concept, I didn't want to use a lot of consultant or system resources. So I decided to do the proof of concept with a colleague (I'm an integration expert focused on SAP PI and don't have much knowledge about BI) and to use tools that would help get a quick result.

 

In the end we were able to finish the proof of concept in a couple of (2-3) business days. I was impressed how easy it was to learn and use HANA and Lumira. Now I want to share our experience with you and encourage you to try and learn. Do it just for fun.

 

http://www.sap.com/pc/tech/in-memory-computing-hana/software/platform/cloud/_jcr_content/sublevelfeature/image.sapimage.jpg/1368555969183.jpg

 

Before starting we had to clarify which tools we had to use. And the winners were:

  • SAP HANA on Amazon Web Services
  • SAP Lumira on desktop (Cloud version wasn't enough since I was not able to connect to an online HANA system and show real-time data )

 

 

Since I had very little knowledge about any of these tools, I started with the SAP HANA Academy and went through the videos one by one, completing almost all of them (SAP HANA Academy | SAP HANA). This was a treasure. I was able to learn fast and repeat all the steps on my own HANA system on AWS, which was helpful since I could practice at the same time.

 

Recommendation #1: To learn about HANA start with the HANA Academy.

 

After that we checked out SAP Lumira Cloud. To see how it worked, we prepared a sample Excel file with 30 sample rows and tried to create some visualizations. It was easier than we thought. You prepare your data, and by just dragging and dropping you are able to create incredible reports. No complex cubes, universes or other things. (Just what we needed.)

 

Recommendation #2: Lumira Cloud is a great place to learn Lumira.

 

The next step was to connect the two. I struggled a bit at the beginning, but with some Google engineering it didn't take long.

 

After starting our HANA instance on AWS, I created a schema, created my table and put some data in it (awesome import functionality provided by HANA). Next we downloaded and installed Lumira on the desktop, enabling the 30-day trial to use the standard edition rather than the personal edition. We connected Lumira to HANA (you just provide your IP, instance ID, username and password) and voila.


Retrieving the data from HANA into Lumira and reporting on it required some Google and SCN engineering, but after working on it for a few hours you can figure it out easily. There are also some great blogs on SCN.

 

Recommendation #3: You can find a lot of help and resources on SCN.

 

After creating some sample reports we finished our proof of concept. Everything was in place.

 

Wow.

 

From PI to BI in 3 days. Who would have thought?

 

It was fun, encouraging and an awesome experience. To anyone interested in learning HANA, my advice is "just go for it". Be careful when using HANA on AWS, because you pay almost $4 for one hour, so don't forget to stop your instance after you stop working. Don't worry about stopping it, because when you start your instance again, HANA will be up and running and all your data will be in place. It takes 5 minutes maximum.

 

Recommendation #4: Start now and have fun.

 

PS: Things I'm looking for or got stuck at:

  • Looking for: Lumira Automatic refresh. (retrieving data from HANA in small time intervals)
  • Got stuck at: Had no idea how to use the "Create a geographic hierarchy - By Latitude / Longitude". It is grayed out/disabled and can't be used. Also searched on SCN but couldn't find any help.
  • Looking for: Connecting Lumira Cloud to my HANA.
  • Looking for: Lumira Server
  • ... I'm tired... looking for a warm bed right now

How To...Calculate YTD-MTD-anyTD using Date Dimensions


Requirement

It is a very common requirement to view various “slices” of data based on different time criteria on the same row in a report or analysis. “Show me current year to date vs. prior year to date sales figures for all sales orgs”. This can be accomplished in a number of ways, many of which have been explored in the following blog posts.

 

Implementation of WTD, MTD, YTD in HANA using Input Parameters only

Implementation of WTD, MTD, YTD Period Reporting in HANA using Calculated Columns in Projection

Implementation of WTD, MTD, YTD in HANA using Script based Calc View calling Graphical Calc view

Applying YTD in SAP HANA with SAP BO Analysis Office

 

All of these approaches have one thing in common – they all rely on generating the time selection criteria at runtime. This is accomplished either through an input parameter/variable from a user or, in the case of the scripted view, by determining it dynamically at runtime of the view.

 

While effective, this is not really reusable in the sense of a "conformed" DW dimension. Developers may interpret the definitions differently across the many views that start springing up in the system. For example, with "Current Year To Date", does this include all data leading up to the current day, or does it only include data up to the last closed month? Or maybe the last closed week? Two developers may interpret the same semantics slightly differently, and users (and developers) may become confused.


Process Approach Overview

Why don’t we predetermine the time slice definitions as attributes physically in the data so everyone can use them?!

 

I would draw a similarity with cooking popcorn in the microwave. Now, I could read the instructions and manually choose how long I should cook my popcorn OR I can press one button and have that time determined for me without having to provide anything. This is one of the key points (for me at least) in BI application design – don’t make your users have to think too hard to get the answers they need. Now, I never said your popcorn won’t burn, and I’d like to think the HANA approach is a *little* more precise, but you get the idea of the ‘easy’ button I am trying to describe here.

 

In many of the previous DW/BI projects I have been involved in, the date dimensions play a large role in helping to solve this type of “time slice” requirement. Here, a given member of the dimension can have many different flags or indicators present, which when joined to the fact table, can act as filters on your data.

 

This would replace the need to determine the filters at run time since they are already available to you for use in joins.

 

The added benefits are:

  • These rules or definitions will be reused across all solutions, therefore imposing some level of data consistency in the DW and end users or content creators can get a consistent experience.
  • By making these rules accessible in a dimension, the options for solutions become more flexible as we can now model directly in the analytic view without the need for a Calculation view.
  • We never have to be concerned about ‘filter pushdown’ since we are relying on the joins in the Analytic views to work for us at the lowest level possible.
  • You can introduce as many different filters as you need: YTD, MTD, QTD, WTD, including fiscal definitions if required. You just need some logic and a new column on your target dimension (see the sketch after this list).
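
For example, a QTD flag analogous to the YTD flag could be added with one new column and a small piece of update logic. A sketch, assuming the MONTH_DIM table defined later in this post; the QTD semantics chosen here (same quarter as today, up to and including the current month) are one possible definition, not the only one:

-- Add a QTD flag to the date dimension (sketch; semantics are an assumption)
ALTER TABLE "MOLJUS02"."MONTH_DIM" ADD ("QTD" NVARCHAR(1));

UPDATE "MOLJUS02"."MONTH_DIM" SET
    "QTD" = CASE WHEN FLOOR((TO_INT("MONTH") + 2) / 3) = FLOOR((MONTH(current_date) + 2) / 3)
                  AND TO_INT("MONTH") <= MONTH(current_date)
            THEN 'Y' ELSE 'N' END;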

 

I have to give credit to a colleague of mine, Greg Cantwell, who put together the production version that this demonstration is built from.


Happy HANA!

Justin


HANA Development

This solution consists of the following components

 

  1. Date dimension table based on a given granularity (day, month, fiscal period, etc.) of your choice. In my demo I will be using the monthly grain. HANA provides functions to create time data, which you can generate and then either modify directly or move to another schema. See Appendix 1 for details.
  2. A fact table containing the grain chosen in step 1, monthly in this example.
  3. A SQL script (or other method) that can update the date dimension table with the “current state” per business definitions. This would execute at some predefined schedule to ensure that the dimension “moves up” as time changes.
  4. An attribute view that wraps the table from step 1.
  5. An Analytic view that wraps the table from step 1 and includes the attribute view in step 4. Here I will also illustrate how the “time slices” can be baked into the analytic view without using a Calculation view.
  6. A graphical Calculation view that consumes the analytic view.
  7. Performance testing in Appendix 2 for Calc View vs. Analytic View.

 

 

Date dimension table. See attached file for insert statements.

 

CREATECOLUMNTABLE"MOLJUS02"."MONTH_DIM"

    ("YEAR"NVARCHAR(4),

        "YEAR_MONTH"NVARCHAR(6),

        "QUARTER"NVARCHAR(2),

        "MONTH"NVARCHAR(2),

        "YTD"NVARCHAR(1),

        "CY"NVARCHAR(1),

        "PY"NVARCHAR(1),

        PRIMARYKEY ("YEAR",

        "MONTH")) UNLOAD PRIORITY 0 AUTO MERGE;

 

A fact table containing the grain chosen in step 1, monthly in this example.  See attached file for insert statements.

 

CREATECOLUMNTABLE"MOLJUS02"."FACT_TABLE"

    ("YEARMONTH"NVARCHAR(6),

        "MATERIAL"NVARCHAR(18),

        "CUSTOMER"NVARCHAR(10),

        "SALES"DECIMAL(15,2),

        "COST"DECIMAL(15,2),

        PRIMARYKEY ("YEARMONTH",

        "MATERIAL", "CUSTOMER")) UNLOAD PRIORITY 0 AUTO MERGE;

 

A SQL script (or other method) that can update the date dimension table with the “current state” per definitions. Here I will just show the SQL; you can choose how to implement it.

 

UPDATE"MOLJUS02"."MONTH_DIM"SET

"YTD" = casewhenmonth(current_date) >= "MONTH"

       then'Y'else'N'end,

       "CY" = casewhenyear(current_date) = "YEAR"

       then'Y'else'N'end,

       "PY" = casewhenyear(current_date)-1 = "YEAR"

then'Y'else'N'end;

 

For each record in the date dimension, we are using three different attributes to help us determine the time slices.

 

Column “YTD” (Y/N) tells us whether the month falls within the months that have already passed in the year, INCLUDING the current month. So if we are in March (03), this will flag months 01, 02 and 03 across ALL years that exist in the dimension.

 

Column “CY” (Y/N) tells us whether the month falls within the current year.

 

Column “PY” (Y/N) tells us whether the month falls within the previous year.
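
A quick way to inspect the flags after the update is to select them directly; the expected rows below are illustrative only, assuming the current date falls in March 2014 and a dimension spanning 2013-2014:

SELECT "YEAR", "MONTH", "YTD", "CY", "PY"
FROM "MOLJUS02"."MONTH_DIM"
ORDER BY "YEAR", "MONTH";

-- Illustrative result for a current date in March 2014:
-- YEAR  MONTH  YTD  CY  PY
-- 2013  01     Y    N   Y
-- 2013  04     N    N   Y
-- 2014  03     Y    Y   N
-- 2014  04     N    Y   N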

 

An attribute view that wraps the table from step 1.

 

An Analytic view that wraps the table from step 1 and includes the attribute view in step 4. Here I will also illustrate how the “time slices” can be baked into the analytic view without using a Calculation view.


The join is on YEAR/MONTH (201301) and is a 1:N from the attribute view to the fact table.

 

Now that we have the date dimension hooked up to a fact table, we can run some queries that help illustrate how these flags can be used. Notice that we don’t have to pass in any filter criteria for selecting the time slices; that logic is already done for us.

 

All of the data is shown through data preview; notice how the three flags tell us the story.


This query will show us the first three months that have already passed in the year, against the same months in all years contained in the fact.

SELECT"YEAR", SUM("SALES")

FROM"_SYS_BIC"."sandbox.justin.Date_Dim/AN_TEST_FACT"

WHERE"YTD" = 'Y'


 

This query will show us all the sales by material for the current year to date

SELECT"MATERIAL", SUM("SALES") AS"CURRENT_YEAR_SALES"

FROM"_SYS_BIC"."sandbox.justin.Date_Dim/AN_TEST_FACT"

WHERE"YTD" = 'Y'

AND"CY" = 'Y'

GROUPBY"MATERIAL"


 

This query will show us all the sales by material for the current year to date against the previous year to date.

SELECT"MATERIAL", SUM("CURRENT_YEAR_SALES"), SUM("PREVIOUS_YEAR_SALES")

FROM (

SELECT"MATERIAL", SUM("SALES") AS"CURRENT_YEAR_SALES", 0 AS"PREVIOUS_YEAR_SALES"

FROM"_SYS_BIC"."sandbox.justin.Date_Dim/AN_TEST_FACT"

WHERE"YTD" = 'Y'

AND"CY" = 'Y'

GROUPBY"MATERIAL"

UNION

SELECT"MATERIAL", 0 AS"CURRENT_YEAR_SALES", SUM("SALES") AS"PREVIOUS_YEAR_SALES"

FROM"_SYS_BIC"."sandbox.justin.Date_Dim/AN_TEST_FACT"

WHERE"YTD" = 'Y'

AND"PY" = 'Y'

GROUPBY"MATERIAL")

GROUPBY"MATERIAL"



Model the time slices as restricted measures in the analytic view. Now that we have these flags available, we can use them to define restricted measures.
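
In plain SQL, a restricted measure behaves like a conditional aggregation over the flag columns. The view's actual measure definitions are not shown in this post, so the following is an equivalent formulation rather than a copy of them:

-- What the restricted measures compute, expressed as conditional aggregation
SELECT "MATERIAL",
       SUM(CASE WHEN "CY" = 'Y' AND "YTD" = 'Y' THEN "SALES" ELSE 0 END) AS "CURRENT_YEAR_SALES",
       SUM(CASE WHEN "PY" = 'Y' AND "YTD" = 'Y' THEN "SALES" ELSE 0 END) AS "PRIOR_YEAR_SALES"
FROM "_SYS_BIC"."sandbox.justin.Date_Dim/AN_TEST_FACT"
GROUP BY "MATERIAL";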



Data Preview on Analytic View


A graphical Calculation view that consumes the analytic view and performs a UNION.

 

Projections on Analytic views, including the filters on time slice attributes.




 



 



 

Now that we have the CY/PY measures, we can perform variance and percentage calculations as calculated columns.

 

Data Preview

 

Performance Considerations with above approach, Analytic View vs. Calculation View

I created a similar scenario to the one above, but on a fact table with much larger data volumes: approximately 10 million records per month. The goal was to measure the performance difference between implementing the time slice as an Analytic View restricted measure vs. the Calculation View unions.

 

Conclusion

Both methods result in similar runtimes. The AV performs slightly better (6-7% faster) on total aggregation (no additional columns), and the CV definitely performs better (15-30% faster) with the more detailed dataset. Thus, there is a tradeoff between performance and maintenance/reusability when moving to a Calculation view. More than anything, this proves that both methods are acceptable from a performance perspective; but generally, if you need a more detailed result set, the Calc view offers better performance.

 

Analytic View

--Aggregated

SELECT SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), SUM("PRIOR_YEAR_COST"), COUNT(*)
FROM "_SYS_BIC"."sandbox.justin.Date_Dim/AN_TEST_FACT_WDATA"


 

Average Run Time is 375ms

Statement 'SELECT SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), ...'

successfully executed in 374 ms 683 µs  (server processing time: 370 ms 997 µs)

Fetched 1 row(s) in 0 ms 5 µs (server processing time: 0 ms 0 µs)

 

Add more detail

--Add Customer

SELECT"KUNWE", SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), SUM("PRIOR_YEAR_COST"), COUNT(*)

FROM"_SYS_BIC"."sandbox.justin.Date_Dim/AN_TEST_FACT_WDATA"

GROUPBY"KUNWE"

Average Run Time is 1.375 seconds

Statement 'SELECT "KUNWE", SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), ...'

successfully executed in 1.371 seconds  (server processing time: 1.368 seconds)

Fetched 5000 row(s) in 363 ms 267 µs (server processing time: 7 ms 35 µs)

Result limited to 5000 row(s) due to value in Result Preferences

 

Add even more detail

--Add Customer/Material

SELECT"KUNWE", "MATNR", SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), SUM("PRIOR_YEAR_COST"), COUNT(*)

FROM"_SYS_BIC"."sandbox.justin.Date_Dim/AN_TEST_FACT_WDATA"

GROUPBY"KUNWE", "MATNR"

 

Average Run Time is 1.820 seconds

Statement 'SELECT "KUNWE", "MATNR", SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), ...'

successfully executed in 1.815 seconds  (server processing time: 1.811 seconds)

Fetched 5000 row(s) in 241 ms 143 µs (server processing time: 10 ms 11 µs)

Result limited to 5000 row(s) due to value in Result Preferences

 

Very important to note is that by modeling this way, we stay entirely in the OLAP engine and the Calc engine is never invoked.

 

 

Calculation View

--Aggregated

SELECT SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), SUM("PRIOR_YEAR_COST"), COUNT(*)
FROM "_SYS_BIC"."sandbox.justin.Date_Dim/CV_TEST_FACT_WDATA"

 

 

Average Run Time is 410ms

Statement 'SELECT SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), ...'

successfully executed in 400 ms 713 µs  (server processing time: 396 ms 866 µs)

Fetched 1 row(s) in 0 ms 4 µs (server processing time: 0 ms 0 µs)

 

--Add Customer

SELECT"KUNWE", SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), SUM("PRIOR_YEAR_COST"), COUNT(*)

FROM"_SYS_BIC"."sandbox.justin.Date_Dim/CV_TEST_FACT_WDATA"

GROUPBY"KUNWE"

 

Average Run Time is 950ms

Statement 'SELECT "KUNWE", SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), ...'

successfully executed in 975 ms  (server processing time: 1.306 seconds)

Fetched 5000 row(s) in 266 ms 153 µs (server processing time: 6 ms 127 µs)

Result limited to 5000 row(s) due to value in Result Preferences

Add even more detail


--Add Customer/Material

SELECT"KUNWE", "MATNR", SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), SUM("PRIOR_YEAR_SALES"), SUM("PRIOR_YEAR_COST"), COUNT(*)

FROM"_SYS_BIC"."sandbox.justin.Date_Dim/CV_TEST_FACT_WDATA"

GROUPBY"KUNWE", "MATNR"

Average Run Time is 1.52 seconds

Statement 'SELECT "KUNWE", "MATNR", SUM("CURRENT_YEAR_SALES"), SUM("CURRENT_YEAR_COST"), ...'

successfully executed in 1.518 seconds  (server processing time: 1.514 seconds)

Fetched 5000 row(s) in 279 ms 965 µs (server processing time: 6 ms 128 µs)

Result limited to 5000 row(s) due to value in Result Preferences

 

Here, we can clearly see that the Calc engine is invoked, which is expected.



Appendix 1 – generate time data in HANA

In HANA, we can natively generate the Gregorian time dimension data we need for the above type of analysis. You can either modify the table directly in the _SYS_BI schema OR copy it out to an application schema with data and modify it there.

 

Generate Time Data - Modeler View, "Data" pane

 

Choose grain

 

Schema _SYS_BI now has the data in the relevant table

 

You can now copy the data to another schema and alter to add the attributes you wish

CREATECOLUMNTABLE"SCHEMA"."TABLE"LIKE"_SYS_BI"."M_TIME_DIMENSION_MONTH"WITH DATA;

ALTERTABLE....

Survey on the Usage of Cloud based Business Scenarios and their Integration


What do companies think about running cloud based business scenarios and integrating the same on the cloud?

 

In today's fast-moving and globally growing world, customers face more than ever the challenge of responding promptly to market developments and new trends. In 2014, one of these trend topics is definitely cloud computing. In particular, the usage of business process-related services in the cloud is an emerging scenario for future business process platforms. The associated integration of these services into existing processes and IT landscapes remains a challenge.

 

Despite the many advantages of flexible and virtually unlimited availability of services from the cloud, many companies still have concerns about adopting cloud computing. What are the reasons for this? To what extent must cloud providers change their service portfolios to make cloud services more attractive for companies? What experiences have companies had with cloud computing? For which business scenarios do customers demand a cloud-based solution?

 

These and other questions all around the cloud will be answered in a scientific study carried out by itelligence AG, SAP and the University of Paderborn. With this survey we want to give a comprehensive overview of the requirements, knowledge, acceptance, complexity and safety concerns around "services from the cloud". A meaningful study requires your answers as a base. We therefore kindly ask you to invest 5-10 minutes of your time to participate in our online survey at http://cloudsurvey.itelligence.de/s3/en.

 

Taking part is worthwhile!

 

All participants receive a free copy of the results of this study. And as a "thank you", all participants will also be entered into a draw for vouchers. For more information, please join the survey via the link above.

Learning Live: My experience with SAP HANA Academy Live2


This past week I had the distinct pleasure of deeply diving into many of SAP HANA’s capabilities during a three-day SAP HANA Academy Live boot camp. The Big Dig demo room in SAP’s Burlington, Massachusetts office was packed with participants eager to explore their own SAP HANA sandbox with guided tutorials from the SAP HANA Academy team. From connecting to SAP HANA Studio to exploring data in SAP Lumira and SAP Predictive Analysis, the vast amount of useful information the SAP HANA Academy team taught during their SAP HANA Live2 Project provided a strong foundation that will enable me to start harnessing the power of SAP HANA on my own.

 

On day one Tahir Hussain “Bob” Babar detailed how the Windows box that contained SAP HANA Studio connected to the Linux box. Bob explained the differences between all of the various instances and port numbers necessary to run SAP HANA and also the differences between SAP HANA Studio's various views. Bob demonstrated how to create new users with distinguished rights and walked us through how to load data into SAP HANA.

IMG_1521.JPG

 

That afternoon I really enjoyed learning about how to run Lingua Analysis and Sentiment Analysis on a set of social media data. The ability to quickly categorize the different sentiments inlaid within the social media dataset was a powerful example of a very useful application of SAP HANA’s speed. Learn more about Sentiment Analysis by watching this video.

 

On day two, Philip Mugglestone offered a detailed explanation of how Link Prediction works and then walked us through how to create a Link Prediction project in SAP HANA Studio. I found the multilayered structure of Link Prediction fascinating and have enjoyed expanding my knowledge of it with the videos available from the SAP HANA Academy.

 

IMG_1524.JPG

 

Day three started with an exhibition by Bob on how to load data from SAP HANA into SAP Lumira. Bob then showcased how to create various engaging visualizations and storyboards in SAP Lumira. I picked up some excellent tips about visualizing my data in SAP Lumira such as how to filter data using the sorting feature. Bob offers many useful tips and insights about SAP Lumira in his HANA Live2: Creating an Analytic View for Data Visualizations tutorial video.

 

In the afternoon of day three, Philip presented SAP Predictive Analysis by showcasing how to build a complex process with various algorithms. Philip illuminated the many ways data can be analyzed and visualized with PAL algorithms. Finally, Philip showed how to easily write the data from SAP Predictive Analysis back into SAP HANA so it can be analyzed in other applications such as SAP Lumira. Even more information about SAP Predictive Analysis is available here.

 

SAP HANA Academy Live was an engaging educational experience that I would highly recommend to anyone desiring a hands-on exposure to the capabilities of SAP HANA. All of the information covered during the SAP HANA Academy Live boot camp is available in free video tutorials at the SAP HANA Academy.

 

View all of the SAP HANA Live2 Project tutorials at the SAP HANA Academy.


SAP HANA Academy - over 500 free tutorial technical videos on using SAP HANA.


-Tom Flanagan

SAP HANA Academy

Follow @saphanaacademy

SAP HANA Nirvana Part 3: The Inside Story of Deloitte's Third Year of SAP HANA Implementation


Discover what the world's largest professional services firm is able to achieve with SAP HANA at our March 18 Webcast

 

Date: Tuesday, March 18, 2014

Time: 11:00 am EST

Speaker: Christopher Dinkel, IT Leader - Analytics Reporting Studio, Deloitte

 

Please join us on March 18 for the next session of our Webcast series, “SAP HANA Customer Spotlight.”

 

Our returning guest is Christopher Dinkel, IT leader for Analytics and Reporting Studio, Deloitte. Mr. Dinkel will share uncensored, real-world lessons from Deloitte’s own self-directed SAP HANA implementation.

 

Over the course of three years, the goals of Deloitte’s implementation program have been to:

  • Reduce data-load time from hours to seconds
  • Improve report run-time from hours or minutes to milliseconds
  • Streamline business processes along the way

http://w.on24.com/r.htm?e=763792&s=1&k=A7EBB53B2F866F4A2D570B4ED9F79CB4

Register now to explore how Deloitte is transforming the way it uses analytics to make the “undoable” doable.

 

We hope you take this opportunity to explore our customers’ firsthand experiences with SAP HANA. Visit the “SAP HANA Customer Spotlight” series’ home page for a complete listing of upcoming and on-demand events.

 

 

Register Today!

Scenario Where a Graphical Calculation View will Not Suffice


In my last HANA project, I had a requirement to build a HANA model for the supply chain delivery performance report.

 

I was able to incorporate most of the business logic required for the report in a graphical calculation view, but there was one requirement, calculating the number of working days between two specific dates excluding weekends, for which a graphical calculation view wouldn't suffice.

 

For this requirement, I had to build a script-based calculation view as a wrapper on top of the graphical calculation view to incorporate the required logic.

 

The SQL snippet below is written in the script-based calculation view to calculate the number of working days between the actual and promise ship dates, excluding weekends.

 

(SELECT CAST((ABS(DAYS_BETWEEN(TO_DATE(ACTUAL_SHIP_DATE), TO_DATE(PROMISE_SHIP_DATE))) / 7) AS INTEGER) * 5 +
        MOD(ABS(DAYS_BETWEEN(TO_DATE(ACTUAL_SHIP_DATE), TO_DATE(PROMISE_SHIP_DATE))), 7) -
        (SELECT COUNT(*)
         FROM (SELECT 1 AS d FROM DUMMY UNION ALL
               SELECT 2 FROM DUMMY UNION ALL
               SELECT 3 FROM DUMMY UNION ALL
               SELECT 4 FROM DUMMY UNION ALL
               SELECT 5 FROM DUMMY UNION ALL
               SELECT 6 FROM DUMMY UNION ALL
               SELECT 7 FROM DUMMY) AS weekdays
         WHERE d < MOD(ABS(DAYS_BETWEEN(TO_DATE(ACTUAL_SHIP_DATE), TO_DATE(PROMISE_SHIP_DATE))), 7)
           AND DAYNAME(ADD_DAYS(GREATEST(TO_DATE(ACTUAL_SHIP_DATE), TO_DATE(PROMISE_SHIP_DATE)), -d)) IN ('SATURDAY', 'SUNDAY'))
 FROM DUMMY) AS PROM_ACT_SHIP_DAYS
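
To see what the expression computes, take an illustrative pair of dates (my own example values, not from the original report): ACTUAL_SHIP_DATE = 2014-03-03 (a Monday) and PROMISE_SHIP_DATE = 2014-03-14 (a Friday). DAYS_BETWEEN returns 11, so the full-week term contributes CAST(11 / 7 AS INTEGER) * 5 = 5 weekdays, and the remainder is MOD(11, 7) = 4 days. The subquery then checks the offsets d = 1..3 (d < 4) back from the later date (Thursday, Wednesday, Tuesday), finds no Saturday or Sunday among them, and subtracts 0, giving 5 + 4 - 0 = 9 working days.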

 

Questions/Comments are welcome.

 

- Goutham

How much does a SAP HANA appliance really cost?


There has been some talk this week around SAP HANA Hardware Pricing. SAP published a price list which is now only available on Twitter. The HANA hardware price list page is down for now and should be up soon.

 

The original pricing showed $17k for a 512GB appliance, but that wasn't a certified appliance as it uses the Intel E5 platform, and this isn't supported by SAP. Interestingly the Amazon AWS EC2 appliance that runs HANA One, does in fact run on Intel E5 CPUs on the Xen virtualization platform. Ironically, HANA One is 40% faster per core than an equivalent Intel E7 CPU, because it is a newer generation, but we digress.

 

It also shows $55k for a 1TB appliance, which I think is a realistic price for an Intel E7 v2 appliance, but this hasn't been certified yet by SAP. It won't be long before this happens, however, which is very exciting news!

 

I thought I'd do a bit of primary research and see what I could buy a 512GB HANA system at from the internet. There are some reasons why HANA hardware has some cost associated with it.

 

- HANA requires Intel E7 CPUs, which cost over $4000 each. A 512GB HANA system requires 4 of these, so $16000

- HANA requires SSD Storage from Fusion IO in most cases, which costs $9500 for a 512GB system

- HANA requires SuSe Linux, which is $6000 for 3 years including support

- 512GB RAM costs around $7500

 

Add these up and we're at $39k before we start, for the main components in a 512GB system.

 

So, I went to dell.com and looked to see what I could buy a HANA system for. Here's what I came up with based on the information available in the SAP Product Availability Matrix for SAP HANA and the Dell SAP HANA appliances I have seen in the field. I might have a few small mistakes but it is close.

 

Component                                    Price (512GB)   Price (256GB)
Base Price for Dell R910 Server              $8935.00        $8935.00
16-Drive Chassis                             $374.88         $374.88
Upgrade to 2x E7-4870 CPU                    $6342.86        $6342.86
Upgrade to 4x E7-4870 CPU                    $8261.47        N/A
Upgrade to 512GB RAM                         $7459.99        $3729.99
Upgrade to 3 year Mission Critical Support   $1499.49        $1499.49
Upgrade to 10 300GB SAS Disks                $2241.70        $2241.70
SuSe Enterprise Linux 3 year subscription    $5597.23        $5597.23
High Output PSU                              $448.35         $448.35
785GB FusionIO ioDrive                       $9371.10        $9371.10
iDRAC Enterprise                             $261.66         $261.66
Total                                        $66383.00       N/A
Online Discount                              -$16612.48      N/A
Grand Total                                  $49770.52       $38802.26

 

Yes, the numbers don't quite add up because Dell put the "instant savings" into the line item prices.

 

Note that this is an online price and Dell might discount it further if you are a good customer. When I used to buy Dell equipment in a previous role, I'd expect to pay a bit less than the online price, but I'm no longer a buyer, so take this with a pinch of salt.

 

I also took a look at street prices online, and I can see that Dell's line level prices for CPU and RAM are around 10% above street prices, which would suggest that an additional 10% discount should be easy to negotiate.

 

Also do note that I've taken a simple single-node SAP HANA appliance. If you are using a more complex appliance that has a scale-out configuration then expect to pay more per TB. I'm not going to try to build out the configuration for one of those right now because they require many more parts including shared storage and interconnects.

 

I have seen some more worrying things happen (not from Dell), like vendors claiming that HANA has special "parts". This is nonsense - SAP HANA systems are built from high-end commodity parts and there is no secret sauce. One exception is IBM, which even for a single node uses its proprietary GPFS filesystem, which requires a license.

 

But if you're asking for a quote from Dell for a 512GB appliance, then expect to pay no more than $50k. I'd be interested to see what you're seeing as Dell customers. Since all the other hardware providers use much the same components from the same suppliers, then this should be a decent benchmark.

 

What about in the cloud?

 

SAP now have Infrastructure as a Service pricing for $3595 a month for the same 40-core 512GB box and $6495 a month for an 80-core 1TB box. I suspect this is pretty compelling if you want to get moving quickly and favor subscription pricing over capitalizing. And if you don't want to support the infrastructure yourself, of course.

 

Is anything changing in 2014?

 

Well, one thing is for sure: memory and FusionIO prices have come down a long way since HANA started. When I first started, the FusionIO card would cost $40k and the RAM would cost the same again.

 

In 2014, we will see the advent of the Intel E7 v2 CPU. This will have more cores having more power, meaning SAP will probably certify 1TB RAM for a 4-socket system. This means that we should be able to buy a 1TB HANA node for $50k list price very soon. Good things are happening in HANA land!

How much does a SAP HANA appliance really cost? Part 2, IBM & Ivy Bridge


The last blog I wrote, How much does a SAP HANA appliance really cost?, seemed to pique people's interest. However, there were two really big questions that came for me out of it: what about the world's best selling HANA appliance, the IBM X5, and what about the new SAP Certified Appliance Hardware for SAP HANA Ivy Bridge appliances based on the IBM X6?

 

So, I painstakingly went through the IBM specification sheets for the IBM X5 appliance, and pieced together what precise IBM components would be required and what the list price would be. Then, I went through the internet and found street prices for each component, which should give you a feeling for your negotiating power. Here's what it looks like for 256GB (2-socket, size S) and 512GB (4-socket, size M) appliances:

 

 

Description | Part | 256GB List | 256GB Street | 512GB List | 512GB Street
Base System | 7143C3U | 26699 | 17602.12 | 26699 | 17602.12
CPUs | 69Y1899 | 0 | 0 | 16698 | 15611.98
RAM (16GB) | 49Y1400 | 17184 | 4607.84 | 34368 | 9215.68
785GB Flash | 46C9081 | 13499 | 6499 | 13499 | 6499
8x600GB Disk | 49Y2003 | 6632 | 2120 | 6632 | 2120
M5015 RAID | 46M0829 | 749 | 379.95 | 749 | 379.95
4x 1G Ethernet | 49Y4240 | 529 | 285 | 529 | 285
SuSe Linux | 00D8096 | 7250 | 6224 | 14499 | 12448.06
2x Emulex 10G | 49Y7950 | 1258 | 483.84 | 1258 | 483.84
Total |  | $73800 | $38201.75 | $114931 | $64645.63

 

 

So IBM list pricing is much higher than Dell's - nearly double - but the discount you can expect is much heavier too. Note that this does not include GPFS licensing, though for a single node I don't see what GPFS brings.

 

Figuring this out for the IBM X6 was much harder, and I may have made some small mistakes - please correct me if you see any. Ivy Bridge appliances only require 500GB of flash, plus 3x RAM of disk for snapshots, so they should be relatively cheaper. Plus, the IBM X6 appliance puts its flash storage in DIMM slots, which is very innovative - it is a very cool looking appliance. Here's what the pricing looks like:

 

Description | Part | 512GB List | 512GB Street | 1TB List | 1TB Street
Base System | 3837C4U | 28349 | 25934.58 | 28349 | 25934.58
CPUs | 44X3996 | 0 | 0 | 20918 | 20313.1
RAM (8GB) | 00D5036 | 12736 | 7036.8 | 0 | 0
RAM (16GB) | 46W0672 | 0 | 0 | 22720 | 12476.8
4x200GB Flash | 00FE000 | 12316 | 12123 | 12316 | 12123
4x1.2TB | 00AJ146 | 3956 | 3928.04 | 3956 | 3928.04
SuSe Linux | 00D8096 | 7250 | 6224 | 14499 | 12448.06
Total |  | 64607 | 55246.42 | 102758 | 87223.58

 

This is fascinating. At list price, a 256GB X5 system is the same price as a 512GB X6, and a 512GB X5 is even more expensive than the 1TB X6. This is notionally because the main cost in a system is the number of CPUs, and Ivy Bridge is twice as powerful per CPU.

 

However, the discounts available for Ivy Bridge hardware are a fraction of what is available for Westmere hardware - whilst we can expect nearly 50% off list for the X5, I was seeing just a 15% discount for the X6. I'm certain that this will change once the components are more readily available.

 

Conclusions

 

I took a few things away from this - first, you can buy a 512GB Westmere appliance from Dell online for $50k, whilst IBM's street price is $65k. This probably means that my street prices can be negotiated further down by a savvy buyer.

 

And second, the $50k 1TB system that I predicted this year is not here yet, at least from IBM. But, given the $100k list price from IBM, once components settle down, we should definitely expect $50k from other vendors in 2014 - this should be your price target as a buyer. Once Huawei properly enters the US and EMEA markets, things will get very interesting because their FusionCube is clearly a very innovative system and may be much lower cost to build.

 

My third point is that these prices are for Data Mart or BW on HANA appliances. For Business Suite on HANA, we are allowed up to 1.5TB RAM per CPU (3x more). I need to price these out but they will be very cost-effective for up to 6TB!

 

My last point is that most systems I'm working on with HANA are much bigger than this - multi-node, scale-out systems with 5-20TB of RAM. Please don't expect pricing to increase linearly for much bigger systems, because they require shared storage and much more expensive 10-40Gbit/sec interconnects and networking.


How to derive Year & Month from Timestamp in an Analytic View


This document describes how to extract Year and Month details from Timestamp fields in an analytic view.

 

I saw a question posted here http://scn.sap.com/thread/3520511 asking for the same thing and tried it out, only to find that the standard YEAR() function I used to use for extracting the year from a timestamp was not available when defining my new calculated column.

 

It appears that only the functions provided there can be used, so this document describes how to extract Year and Month from a timestamp using the limited set of calculation functions available in an analytic view.

 

So in my scenario here, I created a simple analytic view with 3 fields. The important field is TIMESTAMP, which holds the value I need to derive the year and month from.

 

[Screenshot: the analytic view with its three fields]

 

 

A quick data preview shows what the data looks like for this view.

 

[Screenshot: data preview of the view]

 

 

 

So the first thing I do here is convert this timestamp value to a string.

 

To do this, I define a new column TS_STRING and use the string() function as shown below:

 

[Screenshot: definition of the TS_STRING calculated column]
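For reference, the formula in the calculated column editor is just the conversion function applied to the timestamp column (a minimal sketch; quoting of column names may differ in your model):

  string("TIMESTAMP")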

 

 

 

Save, activate and do a data preview. You will get the output below.

 

Notice that the new field shows the timestamp value converted to a string type.

 

What this allows us to do now is use the existing string functions to extract the year and month from this field. Notice that the year can be derived from the first 4 characters of the field and the month from the 6th and 7th characters.

 

[Screenshot: data preview showing the TS_STRING column]

 

 

Use the leftstr() function to extract the first 4 characters from TS_STRING. Define it as below:

 

[Screenshot: definition of the YEAR calculated column]
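The corresponding formula is simply (a sketch, matching the field names used above):

  leftstr("TS_STRING", 4)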

 

 

 

Now, use the midstr() function to extract the month, starting at the 6th character and taking 2 characters, i.e. the 6th and 7th characters.

 

[Screenshot: definition of the MONTH calculated column]
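The formula here would be (a sketch; midstr takes the start position and the length):

  midstr("TS_STRING", 6, 2)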

 

 

Now you get the required output.

 

[Screenshot: final data preview with YEAR and MONTH columns]

 

 

Again, you can avoid creating a separate TS_STRING column by combining the string conversion and substring functions directly in the YEAR and MONTH field definitions, as sketched below.
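A minimal sketch of the combined expressions (same functions, just nested; column quoting may differ in your model):

  YEAR:  leftstr(string("TIMESTAMP"), 4)
  MONTH: midstr(string("TIMESTAMP"), 6, 2)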

 

I just created a separate field for more clarity.

 

Do let me know if there are better ways to do this.

 

 

 

Cheers!!!

 

 

Shyam Uthaman

SAP HANA Content Security Roles Setup


A few months ago I was given the task of implementing content security in SAP HANA. The main purpose of this task was to give business users access to the information models created in SAP HANA. For example, a Finance user should only see the Finance package and should be able to access the information models in that package via BI tools such as Analysis for Excel.

So, after some research and a few discussions with various people, I came up with the following security model.

 

[Diagram: the security role model]

Let's assume that content is maintained in the following structure:

 

[Screenshot: package structure of the content]

 

 

So, based on each type of privilege, I created the roles shown below.

System Privilege Roles

These roles are mainly needed for System admin tasks (technical role)

X_HNS = S for System Privilege role

                                                                            

Role Name | Description | Privilege Type | Assigned Privileges
X_HNS_USERADMIN | Create users, change their passwords and delete users | System Privilege | USER ADMIN, DATA ADMIN
X_HNS_ROLEADMIN | Create, alter and drop roles with SQL commands [1] | System Privilege | ROLE ADMIN
X_HNS_SYSADMIN | Administer the HANA system, alter system parameters and execute ALTER SYSTEM commands | System Privilege | INIFILE ADMIN, LICENSE ADMIN, LOG ADMIN, SERVICE ADMIN, SESSION ADMIN, TRACE ADMIN, AUDIT ADMIN
X_HNS_SYSMON | Enable traces and auditing and manage logs to monitor the system | System Privilege | CATALOG READ, MONITOR ADMIN
X_HNS_CONTENTADMIN | Create, alter, import, export and drop content | System Privilege | CREATE SCENARIO, CREATE STRUCTURED PRIVILEGE, STRUCTUREDPRIVILEGE ADMIN, REPO.EXPORT, REPO.IMPORT, REPO.MAINTAIN_DELIVERY_UNITS, REPO.WORK_IN_FOREIGN_WORKSPACE
X_HNS_DATAADMIN | Create schemas, import and export tables and drop tables | System Privilege | CATALOG READ, CREATE REMOTE SOURCE, CREATE SCHEMA, IMPORT, EXPORT
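These roles can be built in the Studio UI or with plain SQL. A minimal sketch for the first role, assuming you run it as a user holding ROLE ADMIN and the privileges being granted:

  CREATE ROLE X_HNS_USERADMIN;
  GRANT USER ADMIN TO X_HNS_USERADMIN;
  GRANT DATA ADMIN TO X_HNS_USERADMIN;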

Object Privilege Roles

X_HNO = O for Object Privilege Role

                                                                                                                                                                    

Role Name | Description | Privilege Type | Assigned Privileges
X_HNO_CONTENT_READ | Gives read access to activated views | Object Privilege | _SYS_BI (SELECT, EXECUTE) - see note below
X_HNO_CONTENT_WRITE | Gives write access to activated views and read access to the schema | Object Privilege | _SYS_BI (EXECUTE, SELECT, INSERT, UPDATE, DELETE); _SYS_BIC (CREATE ANY, ALTER, DROP, EXECUTE, SELECT, INSERT, UPDATE, DELETE, INDEX)
X_HNO_CONTENT_LIST |  | Object Privilege | REPOSITORY_REST (EXECUTE)
X_HNO_SCHEMA_READ |  | Object Privilege | SCHEMA (SELECT)
X_HNO_SCHEMA_WRITE |  | Object Privilege | SCHEMA (CREATE ANY, ALTER, DROP, EXECUTE, SELECT, INSERT, UPDATE, DELETE, INDEX)
X_HNO_FI_CONTENT |  | Object Privilege | _SYS_BIC.FI column views
X_HNO_CO_CONTENT |  | Object Privilege | _SYS_BIC.CO column views
X_HNO_IM_CONTENT |  | Object Privilege | _SYS_BIC.IM column views
X_HNO_LE_CONTENT |  | Object Privilege | _SYS_BIC.LE column views
X_HNO_MM_CONTENT |  | Object Privilege | _SYS_BIC.MM column views
X_HNO_PA_CONTENT |  | Object Privilege | _SYS_BIC.PA column views
X_HNO_PU_CONTENT |  | Object Privilege | _SYS_BIC.PU column views
X_HNO_SD_CONTENT |  | Object Privilege | _SYS_BIC.SD column views
X_HNO_SP_CONTENT |  | Object Privilege | _SYS_BIC.SP column views

Note: X_HNO_CONTENT_READ only needs _SYS_BIC (SELECT, EXECUTE) if you are using HANA Studio to access the views. Leaving it out for the BI tools gives tighter security over which activated views are visible: access to _SYS_BIC exposes all activated views and would invalidate this model, so a separate role can be created for that privilege.
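In SQL, the schema-level roles are straightforward, whereas grants on activated content usually go through a _SYS_REPO procedure, since _SYS_REPO owns those column views. A sketch, where "MYSCHEMA" and the view name are invented placeholders:

  CREATE ROLE X_HNO_SCHEMA_READ;
  GRANT SELECT ON SCHEMA "MYSCHEMA" TO X_HNO_SCHEMA_READ;

  -- Grants on _SYS_BIC objects typically go via the repository owner:
  CALL "_SYS_REPO"."GRANT_PRIVILEGE_ON_ACTIVATED_CONTENT"('SELECT', '"_SYS_BIC"."FI/CV_EXAMPLE"', 'X_HNO_FI_CONTENT');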

Package Privilege Roles

                                                                                                                                                         

Role Name | Description | Privilege Type | Assigned Privileges
X_HNP_FI_READ | Gives read access to package FI | Package Privilege | REPO.READ on FI
X_HNP_IM_READ | Gives read access to package IM | Package Privilege | REPO.READ on IM
X_HNP_LE_READ | Gives read access to package LE | Package Privilege | REPO.READ on LE
X_HNP_MM_READ | Gives read access to package MM | Package Privilege | REPO.READ on MM
X_HNP_PP_READ | Gives read access to package PP | Package Privilege | REPO.READ on PP
X_HNP_PU_READ | Gives read access to package PU | Package Privilege | REPO.READ on PU
X_HNP_SD_READ | Gives read access to package SD | Package Privilege | REPO.READ on SD
X_HNP_SP_READ | Gives read access to package SP | Package Privilege | REPO.READ on SP
X_HNP_CO_READ | Gives read access to package CO | Package Privilege | REPO.READ on CO
X_HNP_PA_READ | Gives read access to package PA | Package Privilege | REPO.READ on PA
X_HNP_ROOT_WRITE | Gives edit access to ALL packages | Package Privilege | REPO.READ, REPO.EDIT_NATIVE_OBJECTS, REPO.ACTIVATE_NATIVE_OBJECTS, REPO.MAINTAIN_NATIVE_PACKAGES on ROOT
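A package-privilege grant in SQL looks like this (a sketch, assuming FI is a root-level repository package):

  CREATE ROLE X_HNP_FI_READ;
  GRANT REPO.READ ON "FI" TO X_HNP_FI_READ;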

Analytic Privilege Roles

 

There can be many analytic privileges assigned to a role. For example, I create a single analytic privilege first and then create a role for the department with that analytic privilege; more analytic privileges can be added to it later. In our case we are not restricting on attributes, so the analytic privileges carry no attribute restrictions.

 

X_HND = D for Data level restriction

                                                                                                                                                                             

Analytic Privilege | Package | Content | Attribute Restrictions
X_HND_CO_AP1 | CO | Column views under _SYS_BIC.CO/ | NA
X_HND_FI_AP1 | FI | All column views under _SYS_BIC.FI/ | NA
X_HND_IM_AP1 | IM | Column views under _SYS_BIC.IM/ | NA
X_HND_LE_AP1 | LE | Column views under _SYS_BIC.LE/ | NA
X_HND_MM_AP1 | MM | Column views under _SYS_BIC.MM/ | NA
X_HND_PP_AP1 | PP | Column views under _SYS_BIC.PP/ | NA
X_HND_PA_AP1 | PA | Column views under _SYS_BIC.PA/ | NA
X_HND_PU_AP1 | PU | Column views under _SYS_BIC.PU/ | NA
X_HND_SD_AP1 | SD | Column views under _SYS_BIC.SD/ | NA
_SYS_BI_CP_ALL | ROOT | All column views under _SYS_BIC | No restrictions. Currently being used

Now the Analytic Roles

 

X_HNA = A for Analytic Privilege roles

 

                                                                                                           

Role Name | Analytic Privilege
X_HNA_FI | X_HND_FI_AP1
X_HNA_IM | X_HND_IM_AP1
X_HNA_LE | X_HND_LE_AP1
X_HNA_CO | X_HND_CO_AP1
X_HNA_MM | X_HND_MM_AP1
X_HNA_PU | X_HND_PU_AP1
X_HNA_PP | X_HND_PP_AP1
X_HNA_PA | X_HND_PA_AP1
X_HNA_SD | X_HND_SD_AP1
X_HNA_ALL | _SYS_BI_CP_ALL (the only one currently in use)
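Activated analytic privileges are also granted through _SYS_REPO. A sketch for the catch-all role, assuming the executing user may call the procedure:

  CREATE ROLE X_HNA_ALL;
  CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"('"_SYS_BI_CP_ALL"', 'X_HNA_ALL');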

Let's take a look at how we can use system privilege roles to create technical roles:

 

Technical Roles

 

                                     

Role Name | Description | Granted Roles
Y_HNT_SECURITY | Add/delete/edit users and assign roles | X_HNS_USERADMIN, X_HNS_ROLEADMIN
Y_HNT_ADMINS | Perform admin and security tasks | X_HNS_USERADMIN, X_HNS_ROLEADMIN, X_HNS_SYSADMIN, X_HNS_SYSMON, X_HNS_CONTENTADMIN, X_HNS_DATAADMIN
Y_HNT_CONTENT_DEVS | Create and activate information models in packages | X_HNS_CONTENTADMIN, X_HNO_SCHEMA_READ, X_HNO_CONTENT_WRITE, X_HNO_CONTENT_LIST, X_HNP_ROOT_WRITE, X_HNA_ALL

 

Now, let's take a look at a functional role example. Here, Finance user A needs access to the FI package and its information views, so we create a functional role for the Finance department and add user A to it.

 

                 

Role Name | Granted Roles
Y_HNF_FI | X_HNO_CONTENT_READ, X_HNO_FI_CONTENT, X_HNP_FI_READ, X_HNA_ALL

 

In the same way we can create other functional roles depending on our requirements and then assign them to users, as sketched below. It is not mandatory to follow this exact setup; treat it as a reference.
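Putting it together in SQL, a minimal sketch (USER_A is a placeholder user name):

  CREATE ROLE Y_HNF_FI;
  GRANT X_HNO_CONTENT_READ TO Y_HNF_FI;
  GRANT X_HNO_FI_CONTENT TO Y_HNF_FI;
  GRANT X_HNP_FI_READ TO Y_HNF_FI;
  GRANT X_HNA_ALL TO Y_HNF_FI;
  -- Finally, hand the functional role to the Finance user:
  GRANT Y_HNF_FI TO USER_A;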

 

References

  1. SAP HANA Platform SPS6 Security Guide, 3 September 2013, SAP Help Portal, http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf
  2. Tomas Krojzl, 'SAP HANA – Security Concept and Data Architecture', SAP Community Network – Tomas Krojzl's Blog, 24 October 2011, viewed 20 October 2013.

Resolve Performance Issue with HANA Filter Variables (used as prompts)


In this blog, I will discuss a performance issue I experienced while selecting input variables/filters (prompts) for a HANA information view consumed directly in SAP Analysis for Excel. You may run into this issue when consuming a HANA information view that holds a large amount of data.

 

Versions:

Analysis for Excel: 1.4.5.2837

SAP HANA: Rev 70 SPS 07

SAP HANA Studio: 1.0.7000

Usually in SAP HANA we create variables to filter the data in an information view at runtime. For example, in the following window we have a variable for "Plant".

 

 

 

Let's look at the filter variable created in SAP HANA for the attribute "Plant". This variable displays values from the "Plant" column of the same information view. When the user clicks the browse button (as shown in the image above), these values are displayed.

 

 

 

 

When the user clicks the browse button to select a value for the "Plant" variable, a SQL statement is generated. Let's look at the SQL statement captured from the Analysis for Excel log file. I ran that query in SAP HANA Studio to capture the execution time.

 

 

 

You can see that the query took nearly 5 seconds. That means when users click the browse button, they wait 5 seconds before they can select a value, and then another 5 seconds once they have selected it (the same query runs again with a WHERE clause, as I found in the Analysis logs). So in total they wait 10 seconds before they can move on to the next variable (if needed). Now imagine a variable with more than 100 distinct values. This was not acceptable.

 

We can overcome this issue with a small sacrifice: query another view that doesn't hold so much data, or one created specifically for "Plant"-related data. In our setup we have a separate package for master data and shared views, which we reuse across information views. For example, if we need the plant name in 10 information views, we can use one shared view in all 10 (as a dimension in a calculation view's star join, or as an attribute view in an analytic view) instead of creating 10 attribute views for the plant description. This also reduces the number of column views created for similar information across projects.
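To make the difference concrete, here is a hypothetical sketch of the two value-help queries (view names invented). The first scans the large transactional view; the second reads a small, dedicated master-data view:

  -- Value help read from the big transactional view:
  SELECT DISTINCT "PLANT" FROM "_SYS_BIC"."sales/AN_SALES" ORDER BY "PLANT";

  -- Value help read from a dedicated master-data view:
  SELECT DISTINCT "PLANT" FROM "_SYS_BIC"."masterdata/AT_PLANT" ORDER BY "PLANT";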

 

So I changed the information view used for the Plant variable's value help.

 

 

 

Let's take a look at the SQL statement after changing the information view in the "View/Table for value help" section.

 

 

The execution time dropped to 33 ms - a huge difference. However, we mentioned a small sacrifice above: when using another view for variable values, the text (label column) is not displayed alongside the key; only the key is shown. For example, Plant Name is not used in the query and therefore does not appear in the result of the second SQL statement. This could be a bug, or a feature to wait for in an upcoming HANA SPS. Do suggest if you believe it can be fixed. Please share knowledge to gain knowledge.

My stance on SAP HANA certification


On Friday I took the exam C_HANATEC131 and passed with 65%. This came pretty close to my self-assessment that I should pass the test, but that it would be difficult.

First the basics: the SAP certification exams are fair. There is more than enough time to read the questions and try to find the solution. There are even some questions which are extremely easy to answer if you know the basics of SAP HANA. Some others are explicit syntax questions; maybe this tries to test your hands-on experience with SAP HANA. That is not so relevant for my job, but nevertheless a valid exam topic you can prepare for.

Having done some certifications already, I see a common trend which leaves me wondering, so why not write a short blog about it? As a gut feeling, around 40% of the questions were very difficult to answer although I believe I was firm in the topic. The questions should be concise, because reading long texts with each of the 80 questions is tiresome. In general I believe SAP Education is really doing a good job here. However, in my opinion many questions were ambiguously formulated. This is not just my personal opinion; several colleagues share that feeling. You read a question and are told that e.g. 3 answers are correct. It is very easy to spot 2 of the correct answers, but the next one is tricky: you have two candidates for the third correct answer, and both seem equally valid or probable! I could argue both ways for why the one or the other answer is the right one. It mainly depends on how you interpret the question.

With such questions, the exam becomes more of a "what did the person asking that question have in mind when formulating it" than a technical examination of your knowledge. I know that creating such exams is really difficult, but how can it be difficult to answer a question on a topic where you are knowledgeable? I could of course take notes and explain to SAP my concerns and reasoning for each of these approx. 30 problematic questions. Maybe I will do that next time, because there is a risk of failing in such an unfortunate way. But I would need several sheets of paper for that and would hit the time limit prematurely.

SAP software is about reverse engineering; having the ABAP sources helps a lot with bug hunting and analyzing the root cause of problems. For me, SAP exams are primarily about reverse engineering the questions. How is this question meant? How do you read between the lines? Which unsaid assumptions did SAP make? After the exam, I still don't know the mindset of the people who developed the questions; I can only see that my assumptions about the question creators' assumptions were often false. I have no problem with a low score on "Data Provisioning", because I didn't learn that topic well and don't know the details there. I just wonder why learning the material isn't enough, and why SAP adds an additional barrier by formulating many questions ambiguously. My memory is notoriously bad, so I cannot reproduce a specific example here. (SAP Education wouldn't want me to do so anyway.) Are there other people who took a SAP exam and wondered more about how to interpret a question than about the topic itself (because you were really familiar with it)?

Hana Certification exam - topics of concern


Can anyone tell me what I need to study for the HANA certification exam with regard to the following? Are there any documents you can point me to on these topics:

- RDS

- Security

 

Thanks,

 

Joe Z.
