
Playing with Images – BLOB data in SAP HANA !!


Hello Everyone,

 

In this blog let us see how we can bind dynamic images (i.e. based on user input) to the SAPUI5 Image control. Let's take an example of storing images of 100 employees and then displaying them as profile pictures based on employee ID.

 

First you need to process the images and store them in HANA!! Now how do we do that? There are many ways to do this, e.g. using Python, Java, etc., but I chose the Java way to store them as BLOBs in a HANA table. The BLOB datatype can store images/audio/video up to 2 GB.
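Before loading anything, the target table needs to exist. A minimal sketch of the DDL, assuming a two-column layout (an integer employee ID plus the BLOB column named IMAGE, as referenced later):

CREATE COLUMN TABLE "AVIR11"."EMP_IMAGES" (
    "ID"    INTEGER PRIMARY KEY,  -- employee id, derived from the image file name
    "IMAGE" BLOB                  -- the picture itself, up to 2 GB
);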

 

Below is the code snippet for opening an image file, processing it and storing it in the HANA table. Place all your image files in a folder (e.g. C:\\Pictures).


import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ImageOnHana {

    public static final String hanaURL = "jdbc:sap://<hostname>:3<instance>15/";
    public static final String hanaUser = "AVIR11";
    public static final String hanaPassword = "ABCD1234";
    public static final String pics = "C:\\Pictures";

    public static void main(String[] args) throws IOException, SQLException, ClassNotFoundException {
        Class.forName("com.sap.db.jdbc.Driver");
        Connection conn = DriverManager.getConnection(hanaURL, hanaUser, hanaPassword); // Open HDB connection
        conn.setAutoCommit(false);

        String query = "INSERT INTO \"AVIR11\".\"EMP_IMAGES\" VALUES(?,?)";
        PreparedStatement pstmt = conn.prepareStatement(query);

        File folder = new File(pics);
        File[] images = folder.listFiles();
        System.out.println("*****OPEN FILES NOW****");

        try {
            if (images != null) {
                for (File image : images) {
                    // File names are expected to look like "<employee id>.jpg", e.g. "1.jpg"
                    String imgName = image.getName();
                    FileInputStream fis = new FileInputStream(image);
                    String[] parts = imgName.toUpperCase().split(".JPG");
                    String id = parts[0];
                    pstmt.setInt(1, Integer.parseInt(id));
                    pstmt.setBinaryStream(2, fis, (int) image.length()); // stream the file content into the BLOB column
                    pstmt.executeUpdate();
                    conn.commit();
                    fis.close();
                    System.out.println(imgName + " image upload to HANA successful");
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            pstmt.close();
            conn.close();
        }
    }
}

Row inserted into "AVIR11"."EMP_IMAGES" – the column IMAGE has the BLOB datatype.

Blog_pic.jpg

To provide this image to the UI, let's create an XSJS service that reads the BLOB data from the table. Make sure that the content type is set to image/jpg.

 

var empId = $.request.parameters.get("empId");
var conn = $.db.getConnection();
try {
    var query = "SELECT IMAGE FROM \"AVIR11\".\"EMP_IMAGES\" WHERE ID = ?";
    var pstmt = conn.prepareStatement(query);
    pstmt.setInteger(1, parseInt(empId));
    var rs = pstmt.executeQuery();
    if (rs.next()) {
        $.response.headers.set("Content-Disposition", "attachment; filename=image.jpg");
        $.response.contentType = 'image/jpg';
        $.response.setBody(rs.getBlob(1));
    }
} catch (e) {
}
conn.close();

Note: OData does not support the BLOB datatype, hence the response could not be sent via OData.

 

Done!! We are good to go and can integrate this service with the UI5 Image control!!

<Image src="http://<hostname>:8000/avinash/services/XJ_Emp_Images.xsjs?empId=1" width="100%" height="150px">
    <layoutData>
        <l:GridData span="" linebreakL=""/>
    </layoutData>
</Image>

The above view.xml snippet shows a hardcoded/specific employee ID. For a dynamic employee ID, set <Image id="image"> and refer to this id in your controller for setting the source.

 

byId("image").setSrc("http://<hostname>:8000/avinash/services/XJ_Emp_Images.xsjs?empId=" + employeeId);

Blog_pic2.jpg

Voilà my fav star pic for my Employee Id !!

 

If your scenario is to upload a file from the UI using an upload button, you can use the SAPUI5 FileUploader control and use XSJS to receive the uploaded content. The later processing and UI image binding remain the same as above.


Happy Learning !!


Avinash Raju

SAP HANA Consultant

www.exa-ag.com


SAP HANA Distinguished Engineer (HDE) Webinar: Who is my SAP HANA DBA? What can I expect from her/him?


Join the SAP HANA Distinguished Engineer (HDE) Webinar (part of SAP HANA iFG Community Calls) to learn about SAP HANA DBA role and responsibilities.

 

Title: Who is my SAP HANA DBA? What can I expect from her/him?

Speakers: Rajesh Gupta, SAP HANA Distinguished Engineer, Deloitte Consulting LLP

Moderator: Jenny Ly

Date: September 24, 2015
  Time: 8:00 - 9:00 AM Pacific, 11:00 - 12:00 PM Eastern (USA), 5:00 PM CET (Germany)

 

To join the meeting: https://sap.na.pgiconnect.com/i800545

Participant Passcode: 110 891 4496



Germany: 0800 588 9331 tel:08005889331,,,1108914496#


UK: 0800 368 0635 tel:08003680635,,,1108914496#


US and Canada: 1-866-312-7353 tel:+18663127353,,,1108914496#

For all other countries, see the attached meeting request.


meeting.png

 

Abstract: Customers moving to the SAP HANA platform always have a question about who their SAP HANA database administrator is and what they can expect from her/him. Handling a complex and large SAP HANA database can be very challenging if it is not configured and managed correctly and effectively. In organizations with a smaller SAP HANA database, BASIS administrators can play a dual role as BASIS and DBA.

Attend this session to learn about:

  • HANA DBA Role & Responsibility
  • Tools for HANA Database Administration
  • SAP HANA DBA periodic Tasks
  • SAP HANA DBA operational task


About Rajesh: Rajesh is an SAP HANA Distinguished Engineer (HDE) and Lead SAP Enterprise Architect at Deloitte Consulting LLP focused on SAP HANA, migration, and upgrades. He has over 21 years of consulting experience, with 15+ years in SAP and 6+ years as an Oracle DBA. Rajesh is an ASUG volunteer and the ASUG Enterprise Architecture SIG Community Facilitator of the ASUG HQ Team. He is a certified SAP HANA, NetWeaver, BASIS & OS/DB consultant.

Background: SAP HANA Distinguished Engineers are the best of the best, hand-picked by the HDE Council; they are not only knowledgeable in implementing SAP HANA but also committed to sharing their knowledge with the community.

 

As part of the effort to share experiences made by HDEs, we started this HDE webinar series.

 

This webinar series is part of the SAP HANA International Focus Group (iFG).

Join the SAP HANA International Focus Group (iFG) to gain exclusive access to webinars, experts, SAP HANA product feedback, customer best practices, education, and peer-to-peer insights, as well as virtual and on-site programs.

You can see the upcoming SAP HANA iFG session details here.

 

Follow me on Twitter: @rvenumbaka

Little trick to check table filtering on Planviz


I'll share here a little trick we have in Planviz perspective since SP08. Although this is far from a new trick I'm getting surprised everyday by the amount of people working in HANA related projects (HANA Live customizations, Native development, and so on…) that didn’t know that.

 

I took a scenario we had a few weeks back here at SAP Labs as an example for this post. Basically, a query was not performing that well, so we reproduced the issue on a separate box and verified what was going on. I don't want to focus on the query itself, nor on the underlying models used by it. However, just to give an overall idea, the query was on top of a Scripted Calculation View that used other Graphical Calculation Views within it.

 

This was using a few filters like: documents from Company code (=’1000’), Branch (= ‘U007’) and Document Date (April 2010). Something like:

1.png

 

So, to start off, how do you find out which tables are involved in your query execution? Those can be found in the 'Tables Used' view in the Planviz perspective. That view shows all tables used (persisted and internal) by the plan operators of your query execution.

 

For each table listed there, the very first column of that view gives you the maximum number of entries processed by that table in one of the plan operators. It's important to note that the term 'entries' here does not always mean the number of records. Some operators will request the dictionary values of a column, and those will be counted as entries as well.

 

2.png

 

If you double-click a table under the Tables Used view, you'll be redirected to the Operator List view – yet another super useful view in the Planviz perspective. That view contains some very interesting information regarding the query execution. And now that you have filtered on the table of interest, all operators shown are related to your table only.

 

Here’s what it looks like when I double click on table J_1BNFDOC in my scenario:

 

3.png

 

There are many interesting columns here. For now, let's give special attention to Input Rows and Output Rows. Despite the word 'Rows', the values found there do not always have that meaning. I believe these can be interpreted as entries as well. However, for some operators, the 'Rows' term really does mean records from the tables. By looking at the ones that have no input rows and output a number of rows > 0, you have a better chance of seeing actual filtering at table level (especially in *Predicate operators). You can quickly check that by setting the value 'n/a' in the 'Input Rows' equals filter. Here's the output in my scenario:

 

4.png

5.png

 

So now we have a starting point to check whether we're really dealing with the number of records that we're supposed to be dealing with. We can ask questions such as: should I be using 17,081 records for my query? Are my filters being applied or not?

 

Strangely, some of the filters applied in the original query were not being applied to the J_1BNFDOC table as expected. Company Code (BUKRS), Branch and Document Date were not showing up as basic predicates in the Planviz. Only DOCTYP <> '5' was executed (which turned out to be an explicit filter defined in one of the underlying Graphical Calc Views).

 

Checking the table directly confirmed the information.

 

6.png

 

So we should be looking at a maximum of 591 rows instead of 17,631 from that particular table.

 

In this scenario we later discovered that the code behind the query (a scripted calc view on top of graphical calc views) was using neither WHERE clauses nor any input parameter placeholders to filter the underlying calculation views. Development was assuming the filters would be pushed down automatically to the SQL in the variable assignments in the script. That might even work for some simpler scenarios, but clearly not for this one, and it is difficult to guarantee that it will always hold unless you explicitly force it within your code (if you use scripted CVs, of course).
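To illustrate what "forcing it within your code" could look like, here is a minimal SQLScript sketch for a scripted calculation view body; the view name, column names and input parameters are hypothetical and only mirror the filters discussed above (company code, branch, document date):

-- Hypothetical scripted calc view body: the filters are applied explicitly in the
-- SELECT against the underlying graphical calc view instead of relying on push-down.
var_docs = SELECT "BUKRS", "BRANCH", "DOCDAT", "DOCTYP"
             FROM "_SYS_BIC"."mypackage/CV_NF_DOCUMENTS"        -- hypothetical underlying view
            WHERE "BUKRS"  = :IP_COMPANY_CODE                   -- hypothetical input parameters
              AND "BRANCH" = :IP_BRANCH
              AND "DOCDAT" BETWEEN :IP_DATE_FROM AND :IP_DATE_TO;

var_out = SELECT "BUKRS", "BRANCH", COUNT(*) AS "DOC_COUNT"
            FROM :var_docs
           GROUP BY "BUKRS", "BRANCH";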

 

So the suggestion to overcome this scenario was to either adapt the code so that all filters are applied explicitly or switch to pure Graphical Calculation View usage.

 

They actually decided to reconstruct the whole thing using Graphical Calculation Views only – kind of like the way HANA Live views (mostly) work: making use of the SQL engine; better separation of models; use of left/right outer joins wherever possible; cardinality defined where needed, and so on.

 

After the changes, filters started to be pushed down nicely. View Tables Used is now presenting sensible numbers:

8.png

Max entries processed for other tables lowered significantly as well (due to the joins involved in the model).

However, it is important to remember that 'entries' does not always mean rows. But we can double-check that in the Operator List view.

For the central table J_1BNFDOC this is what happened:

9.png

So now we can say that we're not dealing with more than 501 rows and the filters were applied nicely: starting with the branch filter, then the document date filter, and finally the company code filter. All good now.

 

 

And just to make a super short summary of this whole thing, here are the steps to get to your filters:

 

  1. Generate the Planviz for the query
  2. Go to the Tables Used view
  3. Double-click the table whose filtering you want to check
  4. Set the value 'n/a' in the 'Input Rows' filter in the Operator List view
  5. Check other possible filters upwards (in the graph view) and get the number of records your query is using from that table.
  6. Finally, ask yourself if your query is really processing the number of records that it should be processing


I hope that can help you guys in the future.


PS.: a big big thank you to Roberto Falk who immensely helped in this scenario. Thanks man!


BRs,

 

Lucas de Oliveira

HANA Core Data Services and Sybase PowerDesigner - extending the extension


My colleague Martin Donadio has done a great job creating a nice extension to generate Core Data Services for HANA with Power Designer. His work is very well explained in this post: Generating Core Data Services files with Sybase PowerDesigner

Since new features are added to Core Data Services with each HANA SPS, and there are some interesting features not supported by the first version of the extension, I've decided to continue his work and help improve it.

First of all, let me explain why I found the extension so useful when I came across Martin's blog.

 

 

PowerDesigner already works with HANA DB. Why should I bother about an extension for CDS?

 

Yes, that's partially true. But PowerDesigner's standard features only create the runtime objects in the HANA catalog. That way, your artifacts are not transportable and must be created in each system of your landscape. So this is not the best option, nor is it recommended by SAP. You should use CDS and only change the catalog for features not supported by CDS, like DB constraints, if you want to use them.

PowerDesigner is a great tool for modelling, including Physical Models, which is the feature used by the extension. But it's not nice to design the model with the tool and then create all the entities manually in HANA. So, this is where the extension is useful.

 

New features added to the extension: associations

 

Associations are useful for defining relationships in the entity source, facilitating maintenance and simplifying the syntax. They also allow the automatic creation of joins in CDS views when referring to the association. Finally, if you work with XS Data Services, it uses the associations to establish the relations between entities, making programming easier.

Associations do not generate constraints on the DB, and according to Thomas Jung, it's a decision to enforce constraints at application level. So, if you want to use DB constraints, they still need to be defined at runtime level.


Unmanaged associations - why I chose them

 

HANA can work with managed and unmanaged associations. I'll quote Thomas Jung here to explain the difference between them:

"Managed associations generate additional columns in the source table for the association. Unmanaged, on the other hand, use existing columns and don't actually generate anything in to the underlying database table. This gives you a bit more flexibility to do by-directional associations (Header <-> Item). Most of the more complex association types (Backling and many-to-many) both utilize unmanaged associations for exactly this reason."

So, I've decided to use unmanaged associations, for the following reasons:

  1. Flexibility
  2. PowerDesigner automatically adds the foreign key field to the child table when you define a relationship. So, you don't want to have extra fields generated by HANA, as those are being defined in the physical model.
  3. Style. I'd rather have just a single field called HeaderId in an Item table than a field called Header.HeaderId, with the parent table as a prefix.


So, let's see what changed...

 

 

 

First, if you haven't read Martin's blog, I'd recommend you do so, because the steps are well explained there and I won't reproduce them here.

Let's take the following diagram as an example:

 

SalesDiagram.JPG

 

 

It will generate the following CDS file, with all associations and backlinks:

 

SalesCDS.jpg

 

Not all association types are supported yet (many-to-many, for example), but I've tested it for the most commonly used types of relationships and it worked.

After importing it into your HANA project, you can improve it by defining a view in the file with this simple syntax:

 

    Define view OrderComplete as select from OrderItem
    {
        Id,
        ItemNr,
        OrderHeaderFK.SalesOrg,
        OrderHeaderFK.DocType,
        OrderHeaderFK.SoldTo,
        Material,
        ItemValue,
        OrderScheduleFK.DelivDate
    };

 

It will generate a view with all the joins based on the associations among the three tables.

 

I want to add new features. How can I work with PowerDesigner Extensions?


There's a guide called "Customizing and Extending PowerDesigner" (http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc38628.1650/doc/pdf/customizing_powerdesigner.pdf)

This guide explains all the extension possibilities for PowerDesigner. But it does not contain the object metamodel reference. So, after a lot of searching, I found out that I could get it just by accessing the Help menu in PowerDesigner...

 

PowerDesignerMetaModel.jpg

With this, and a little knowledge of VBScript, you can do awesome things. Some suggestions:

  • Create Series Data (mostly used for IoT)
  • Generating sequence files (.hdbsequence)
  • Generating XSOData files based on the model

 

 

The extension file is attached (just remove the .txt extension after downloading and unzipping it). You can use it to generate files as it is, or improve it according to your needs.

I'll also update this blog if I'm able to add new features.

SAP HANA for Enterprise Architects – Joining the Pieces Together – SAP HANA Cloud Integration


The Journey Continues - Episode 6 of 10

SAP has acquired a few different “Software as a Service” (SaaS) companies over the last few years. This presents a challenge for architects in how to integrate SAP ERP, Cloud and other external systems in the SAP landscape. The presentation by Sindhu Gangadharan, Vice President and Head of Product Management - HANA Cloud Integration (HCI), as part of our ongoing series for Enterprise Architects, delivered many of the answers I know EAs are after. You may have implemented Success Factors along with other SAP technology and wondered what the roadmap for true integration is. Watch the webcast if you missed it to get all of the details.

 

As the VP responsible for HCI, Sindhu knows what she is talking about. The product roadmap for Hana Cloud Integration showed how SAP is working towards greater integration and data sharing across their products.

 

The presenter started with some background on integration, progressed through customer case studies and rounded out the webcast with live demos and web links for more information. Overall, a logical progression through the material, in an easy to follow format, that should get any Enterprise Architect the background they need to have a management level conversation around SAP cloud integration.

 

To start off, Sindhu went through the compelling reasons for using SAP HANA Cloud Integration (HCI). This included an overview of the SAP ecosystem and where HCI fits into the architecture.

Blog6Slide1.png

All images © 2015 SAP SE or an SAP affiliate company. All rights reserved. Used with permission of the author.

 

Sindhu presented the benefits realized by several SAP customers including Owens Illinois, one of the world's leading manufacturers of glass containers, as well as partners like Applexus and ITelligence.

 

Later on in the webcast we saw a live demo of HCI and the connectors that exist right now for integration between SAP and non-SAP products. I always like live demos as it shows the actual product and the interface users can expect. We saw how to pick the connector between SAP and Success Factors and how to pick the data fields in a typical data exchange scenario.

 

Many of you may recall when SAP first acquired Success Factors, the integration methods were limited and there was a lot of flat file passing between systems. SAP has come a long way since then. In the Success Factors Employee Central context one of the preferred methods of integration was using Dell Boomi for integration between Success Factors Employee Central and SAP ERP. In the presentation we found out that SAP has a roadmap to use HCI as the preferred method of integration going forward.

Blog6Slide3.png

 

There is a free trial program to test-drive HCI for 30 days that Sindhu made the audience aware of; so SAP has made it pretty easy to take it for a test drive and come up with your own integration strategy.

 

The presentation also included great link pages for those looking at Hana Cloud Integration. This includes links to certifications that the service has obtained.

Blog6Slide4.png

Blog6Slide5.png

I know that this webcast answered many of my outstanding questions on how SAP integrates ERP/Cloud/non-SAP systems together, with a few hints at what is coming in the near future. Does HCI meet what you are looking for? Let me know in the comments or by emailing me directly.


Webcast attendees commented with the following key takeaways:

  • (SAP Integration) Can be cloud or on-premise
  • There are a lot of easy integration options available for HCI.
  • The case study was helpful, and also the upcoming message speeds
  • All the current pre-packaged content was nice to see... esp the future integration with solution manager.
  • Roadmap for HCI and changes related to SF EC using BOOMI
  • SAP has a strong strategy for Cloud integration
  • The cloud is real for integration between SAP products
  • HCI Rocks!

 

In the next webcast, scheduled for September 22nd, the speaker will be covering how SAP HANA fits into the data center and the different architectural considerations.

 

Watch the Episode 6 webcast recording for more details on this blog entry.

http://event.on24.com/wcc/r/1019296/7200E9AD4FC710E3D4BD377C4A1DF8C3

 

Webcast Materials on ASUG.com: https://www.asug.com/discussions/docs/DOC-42211

 

Complete Webcast Series Details https://www.asug.com/hana-for-ea

 

All webcasts occur at 12:00 p.m. - 1:00 p.m. ET on the days below. Click on the links to see the abstracts and register for each individual webcast.

September 22, 2015: SAP HANA and Data Centre Readiness

September 29, 2015: Why SAP HANA, Why Now and How?

October 6, 2015: Implications of Introducing SAP HANA Into Your Environment

October 13, 2015: Internet of Things and SAP HANA for Business


System Replication Implementation and Testing (part 1)


Hi again,

 

My name is Man-Ted Chan and I'm from the SAP HANA product support team. Recently I've been seeing a few issues with High Availability (HA) environments using system replication, so I'm writing this piece on setting up HA along with some troubleshooting tips and SAP Notes.

To avoid confusion with the terminology I will refer to another posting on the SCN:

http://scn.sap.com/docs/DOC-52345

  • System Replication is NOT Host Auto-Failover
  • System Replication is NOT Scale Out
  • System Replication is Disaster Tolerance (DT) / Disaster Recovery (DR)
  • System Replication synchronizes data between two data centers (Site A and Site B)
  • There is always one (logical) primary and one secondary system, e.g. site A is primary and site B is secondary. After a takeover, site B is (logically) primary system. Thus, primary and secondary changes, whereas site A and B will refer to a physical instance.
  • A takeover is making a secondary system functioning as primary system. Note that this explicitly does not include changing the state of the primary (in exceptional/disaster situations, the secondary must not depend on having access to the primary site to be able to change the state)
  • Failback: back to original setup, e.g. a takeover from the backup site to the preferred site: the preferred site may have a better internet connectivity, better reachable by clients, etc.

Also I've had to break up this blog into two parts as I hit a limit on the number of images that can be in a single blog posting.

 

Pre-requisites

  • Have separate primary and secondary servers with HANA installed, with an equal number of services and nodes. The revision of HANA on the secondary server must be equal to or newer than the primary.
  • The secondary system has the same SAP system ID and instance number.
  • Ports 3<instance number>15 and 3<instance number + 1>15 must be available.
  • The primary server must have a backup available.

Setting up System Replication

These are the steps from the document below, but done in an SPS 09 environment:

http://scn.sap.com/docs/DOC-47702

I have included screen caps, tests, and log snippets

Setting up primary
When setting up system replication a backup needs to exist; as a test I will show what happens when there is no backup:

1.png

2.png

Right click on your primary system and select ‘Configure System Replication…’

3.png

4.png

5.png

6.png

As we can see we cannot proceed with the replication as there is no backup. In the next few images we will create the backup.

7.png

8.png

9.png

Afterwards try and create the replication again. Please note that field ‘Primary System Logical Name’ can be whatever you want, but I chose the name ‘primary’.

10.png

After this is run, the following can be found in the nameserver trace:

==== Starting hdbnsutil, version 1.00.090.00.1416514886 (fa/newdb100_rel),

i Basis            TraceStream.cpp(00469) : MaxOpenFiles: 1048576

i Basis            TraceStream.cpp(00472) : Server Mode: L2 Delta

i Basis            ProcessorInfo.cpp(00713) : Using GDT segment limit to determine current CPU ID

i Basis            Timer.cpp(00650) : Using RDTSC for HR timer

i Memory          AllocatorImpl.cpp(01326) : Allocators activated

i Memory          AllocatorImpl.cpp(01342) : Using big block segment size 8388608

i Basis            TopologyUtil.cpp(03894) : command: hdbnsutil -sr_enable --name=primary --sapcontrol=1

w Environment      Environment.cpp(00295) : Changing environment set SSL_WITH_OPENSSL=0

i sr_nameserver TopologyUtil.cpp(02581) : successfully enabled system as system replication source site

 

If you want to use the command line to create this replication, run the following:

hdbnsutil -sr_enable --name=< Primary System Logical Name>

After this your system is now enabled for system replication

Setting up the secondary node

You will have to stop HANA on the secondary server prior to setting up the replication. Right-click on the server and select 'Configuration and Monitoring' -> 'Configure System Replication…'; again, please note that the SID is the same.

11.png

12.png

13.png

14.png

At this step you will name the replication in the 'Secondary System Logical Name' field and enter the host from above (note that the instance number is non-editable).

Replication mode options that are available are the following:

  • Synchronous with full sync option (mode=sync. Full sync is configured with the parameter [system_replication]/enable_full_sync) means that log write is successful when the log buffer has been written to the logfile of the primary and the secondary instance. In addition, when the secondary system is disconnected (for example, because of network failure) the primary systems suspends transaction processing until the connection to the secondary system is re-established. No data loss occurs in this scenario.
  • Synchronous (mode=sync) means the log write is considered as successful when the log entry has been written to the log volume of the primary and the secondary instance.

When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk.

No data loss occurs in this scenario as long as the secondary system is connected. Data loss can occur, when a takeover is executed while the secondary system is disconnected.

  • Synchronous in memory (mode=syncmem) means the log write is considered as successful, when the log entry has been written to the log volume of the primary and sending the log has been acknowledged by the secondary instance after copying to memory.

When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk.

Data loss can occur when the primary and secondary fail at the same time while the secondary system is connected, or when a takeover is executed while the secondary system is disconnected. This option provides better performance, because it is not necessary to wait for disk I/O on the secondary instance, but it is more vulnerable to data loss.

  • Asynchronous (mode=async): The primary system sends redo log buffers to the secondary system asynchronously. The primary system commits a transaction when it has been written to the log file of the primary system and sent to the secondary system through the network. It does not wait for confirmation from the secondary system. This option provides better performance because it is not necessary to wait for log I/O on the secondary system. Database consistency across all services on the secondary system is guaranteed. However, it is more vulnerable to data loss. Data changes may be lost on takeover.

The above is from the SAP HANA Admin guide:

http://help.sap.com/saphelp_hanaplatform/helpdata/en/54/01f498b2c84fb5b3bcdcbda948d991/content.htm?frameset=/en/05/caef5b24794dc2accc4fb6561e26fa/frameset.htm&current_toc=/en/00/0ca1e3486640ef8b884cdf1a050fbb/plain.htm&node_id=440

 

This can be done via the command line

hdbnsutil -sr_register --remoteHost=<primary hostname> --remoteInstance=<instance number> --mode=<sync|syncmem|async> --name=<Secondary System Logical Name>

 

During this registration I ran into the following error

15.png

I then ran it via the command line to show the error

16.png

17.png

I checked the listed nameserver trace to see if there is any other information

==== Starting hdbnsutil, version 1.00.090.00.1416514886 (fa/newdb100_rel),

e Configuration    ConfigStoreManager.cpp(00693) : Configuration directory does not exist.

e Configuration    TopologyUtil.cpp(03894) : command: hdbnsutil -sr_register --remoteHost=xxxxx509 --remoteInstance=00 --mode=sync --name=sec

e sr_nameserver    TNSClient.cpp(06778) : remoteHost does not match with any host of the source site. all hosts of source and target site must be able to resolve all hostnames of both sites correctly

 

From this error we can see that the landscapes of the two systems do not match. I checked the landscape on the primary and secondary:

18.png

19.png

Here we can see that in secondary server there is the ‘sapstartsrv’ process. After this is resolved re-run the wizard or enter in the hdbnsutil command

20.png

‘Initial full data shipping’ is the equivalent of running hdbnsutil -sr_register --force_full_replica

If parameter is set, a full data shipping is initiated. Otherwise a delta data shipping is attempted.

If you run this via command line you will have to manually start up the secondary server

For more information on the hdbnsutil options please refer to the following reference guide

http://help.sap.com/saphelp_hanaplatform/helpdata/en/4d/8f09ec3f2c49f593d415c78e924d9b/content.htm?frameset=/en/52/913ed4a8db41aebef3ce4563c6f089/frameset.htm&current_toc=/en/00/0ca1e3486640ef8b884cdf1a050fbb/plain.htm&node_id=443

 

Checking the status of the replication

You can check the status of the replication in Studio and via the command line. The screen caps below show this:

21.png

22.png

23.png
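In addition to what is shown in the Studio screenshots above, the replication status can also be queried with plain SQL on the primary. A small sketch using the standard monitoring view M_SERVICE_REPLICATION (the column selection here is an assumption, adjust to taste):

-- Check system replication status per service from the primary system
SELECT HOST, PORT, SECONDARY_HOST, SECONDARY_PORT,
       REPLICATION_MODE, REPLICATION_STATUS, REPLICATION_STATUS_DETAILS
FROM M_SERVICE_REPLICATION;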

Check the nameserver trace for the following success messages upon startup to confirm the replication:

TREXNameServer.cpp(12634) : called registerDatacenter from registrator=xxxx301545c

i sr_nameserver    TREXNameServer.cpp(12776) : registerDatacenter; new disaster recovery site id =2

i sr_nameserver    TREXNameServer.cpp(12864) : matched host xxxx509 to xxxx301545c

i sr_nameserver    TREXNameServer.cpp(15138) : volume 1 successfully initialized for system replication

i sr_nameserver    TREXNameServer.cpp(15138) : volume 2 successfully initialized for system replication

i sr_nameserver    TREXNameServer.cpp(15138) : volume 4 successfully initialized for system replication

i sr_nameserver TREXNameServer.cpp(15138) : volume 3 successfully initialized for system replication

24.png

Please note that once you add the replication (secondary) server you cannot open the Administration panel or run SQL queries against it, so you will not be able to check the data on the replication server.

Instead you open it in Diagnosis mode; the screen caps below show the difference between the two.


Diagnosis Mode

25.png

Admin Panel

26.png

Click here for part 2

System Replication Implementation and Testing (part 2)


Hi again,

 

My name is Man-Ted Chan and I'm from the SAP HANA product support team. This is part 2 of my High Availability/System Replication blog; part 1 can be found here.

 

This will continue where the last blog left off

 

How to turn off replication

First we will unregister the secondary server; this means no more data from the primary will go to this server:

27.png

28.png

29.png

30.png

After this has been unregistered, we can run hdbnsutil -sr_state to confirm:

31.png

However, if you check the primary node you will see that the replication is still enabled, but no server for the replication is listed.

32.png

Next we can disable the replication on the primary

33.png

34.png

35.png

Once this is done you can check the replication tab and hdbnsutil -sr_state:

36.png

37.png

As a test, I stopped the primary to see what happens:

38.png

 

 

Other things tested during this phase

As a test I stopped the primary to see what happens to the replication. No automated takeover will occur, but we will see the following network communication errors in the trace files:

e Stream NetworkChannelCompletion.cpp(00524) : NetworkChannelCompletionThread #2 NetworkChannel FD 28 [0x00007fc028072818] {refCnt=3, idx=2} 10.97.22.172/0_tcp->10.97.22.172/30103_tcp ConnectWait,[---c]

: Error in asynchronous stream event: exception  1: no.2110001 (Basis/IO/Stream/impl/NetworkChannelCompletion.cpp:450)

    Generic stream error: getsockopt, Event=EPOLLERR - , rc=111: Connection refused

Please note that if you stop the replication server the primary server will throw the following alerts

ReplicationError with state INFO with event ID  1 occurred at <DATE> on xxxx36f509:30007. Additional info: Communication channel closed

Associated with Alert ID 78

The following error will be found in the trace files

e TNS TNSClient.cpp(00671) : sendRequest dr_getremotereplicationinfo to xxxx301545c:30001 failed with NetException. data=(I)drsender=1|

e sr_nameserver TNSClient.cpp(06880) : error when sending request 'dr_getremotereplicationinfo' to xxxx301545c:30102: connection refused,location=xxxx301545c:30001

i EventHandler EventManagerImpl.cpp(00602) : acknowledge: ReplicationEvent(): Communication channel closed

 

 

If you run into this alert in your own system you should check to see if the secondary node is down (can you start it or was there a crash?)

 

How to perform a takeover

*Please note that performing a takeover should be done only if there is an issue with the primary or if you would like zero downtime during a HANA upgrade.
Right-click on the secondary node and open 'Configure System Replication':

39.png

40.png

41.png

At an OS level you will see the takeover process

42.png

To perform the takeover via the command prompt you would run the following on the secondary server:

hdbnsutil -sr_takeover

*After the takeover a new server had to be created, so the server name changed from 301545c to 59e3753f1.

Please note that on your replication server you will now be able to open the admin panel and not just the diagnosis mode (in diagnosis mode only the 'Processes', 'Diagnosis Files', and 'Emergency Information' tabs are available).

On the old primary server and the old replication server we can check Landscape -> System Replication and see there is no replication:

43.png

Since the replication hasn’t been disabled we will see the communication errors again on the original primary

i EventHandler EventManagerImpl.cpp(00780) : --removeAllEvents: ReplicationEvent(): Communication channel closed

 

 

On the old replication server the nameserver trace will show the following during the takeover if it was successful

i sr_nameserver TREXNameServer.cpp(15647) : re-assign for databaseId 2 volume 2 returned successfully

i sr_nameserver TREXNameServer.cpp(15647) : re-assign for databaseId 2 volume 4 returned successfully

i sr_nameserver TREXNameServer.cpp(15647) : re-assign for databaseId 2 volume 3 returned successfully

i sr_nameserver TREXNameServer.cpp(15703) : issueing "/usr/sap/MV1/SYS/global/hdb/install/bin/hdbupdrep -s MV1 --user_store_key=SRTAKEOVER -b"

i sr_nameserver TREXNameServer.cpp(15686) : reconfiguring all services

 

 

Check the global.ini and nameserver.ini on the secondary node (the primary will not change)

/usr/sap/MV1/global/hdb/custom/config> cat global.ini

[system_replication]

site_id = 2

mode = sync

actual_mode = primary

site_name = rep

 

 

mo-59e3753f1:/usr/sap/MV1/global/hdb/custom/config> cat nameserver.ini

[landscape]

id = 55de6934-1b45-7f0a-e100-00000a6116ac

master = mo-59e3753f1:30001

worker = mo-59e3753f1

active_master = mo-59e3753f1:30001

idsr = 55f36543-7352-8161-e100-00000a61131b

roles_mo-59e3753f1 = worker

 

Memory

In order to minimize memory consumption, the following parameters should be set in the secondary system:

 

 

1) global.ini/[system_replication]/preload_column_tables = false

2) global.ini/[memorymanager]/global_allocation_limit = <size_of_row_store + 20%>

 

 

If the parameter "preload_column_tables" is set to "true" on the secondary side, the secondary system will dynamically load tables into memory according to the preload information shipped from the primary side.

During the takeover procedure, the "global_allocation_limit" should be increased on the secondary side to the same value as on the primary side.

 

Memory on the primary can be consumed in async mode: there is a log buffer that gets filled and then sent over to the secondary. The amount of memory this takes up is set by the parameter below (see the sketch after it):

  1. global.ini -> [system_replication] -> logshipping_async_buffer_size = <size_in_byte>
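A hedged sketch of setting this parameter on the primary via SQL, following the same ALTER SYSTEM pattern used further below for the tracing parameters (the 256 MB value is just an example, not a recommendation):

-- Set the async log-shipping buffer size on the primary (value in bytes)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM')
  SET ('system_replication','logshipping_async_buffer_size') = '268435456'  -- 256 MB, example value
  WITH RECONFIGURE;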

 

Tracing

 

 

For additional information during a takeover please run the following

alter system alter configuration ('nameserver.ini','SYSTEM') SET ('trace','failover')='debug' with reconfigure;

alter system alter configuration ('nameserver.ini','SYSTEM') SET ('trace','ha_provider')='debug' with reconfigure;

Perform the failover test. Once done, you can turn off this tracing:

alter system alter configuration ('nameserver.ini','SYSTEM') UNSET ('trace','failover') with reconfigure;

alter system alter configuration ('nameserver.ini','SYSTEM') UNSET ('trace','ha_provider') with reconfigure;

For general tracing during the replication you can edit, in SAP HANA Studio, global.ini -> trace -> sr_dataaccess = debug and global.ini -> trace -> stream = debug. This will add additional tracing to the indexserver trace.
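The same settings can also be applied via SQL instead of editing them in Studio; a sketch using the same pattern as the tracing statements above:

-- Switch the additional replication tracing on...
alter system alter configuration ('global.ini','SYSTEM') SET ('trace','sr_dataaccess')='debug' with reconfigure;
alter system alter configuration ('global.ini','SYSTEM') SET ('trace','stream')='debug' with reconfigure;
-- ...and off again once the analysis is done
alter system alter configuration ('global.ini','SYSTEM') UNSET ('trace','sr_dataaccess') with reconfigure;
alter system alter configuration ('global.ini','SYSTEM') UNSET ('trace','stream') with reconfigure;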

 

References

System Replication Configuration Parameters

http://help.sap.com/saphelp_hanaplatform/helpdata/en/0c/d257970d514abd8ddf9ee1f45f3bca/content.htm?fullscreen=true

 

 

 

 

 

 

Issues Encountered

Misc.

- After SPS 09, users ran into Alert 79 (Configuration Parameter Mismatch). To resolve this, you can set global.ini -> system_replication -> keep_old_style_alert = false.

The ini files will still be mismatched, but the alert will stop appearing. The user can manually check the mismatches, or go to /usr/sap/<SID>/global/hdb/custom/config, copy from the primary and paste to the secondary, but do not overwrite the global.ini -> system_replication and nameserver.ini -> landscape sections, as this will break replication. Another option is to run the SQL script to find the differences:

HANA_Replication_SystemReplication_ParameterDeviations

 

 

Network Related

- ‘Communication Channel Closed’ errors: the replication server is either down or there is a networking error. (Check to see if the HANA services are running; if they are, talk to your networking team about blocked ports.)

-(DataAccess/impl/DisasterRecoveryProtocol.cpp:3478) Asynchronous Replication Buffer is Overloaded exception throw location:

This error occurs only if you chose ASYNC replication; it can occur if there is slowness in the network. You can check your network statistics with the following table:

HOST_VOLUME_IO_TOTAL_STATISTICS or run the SQL script

HANA_Replication_SystemReplication_Bandwidth

If you need to resolve this issue prior to looking into your network, you can do one of the following:

1) Change the replication mode: hdbnsutil -sr_changemode --mode=sync|syncmem

2) Change global.ini->system_replication->logshipping_async_wait_on_buffer_full = false, this will temporarily decouple the synchronization.

 

 

Registration fails

 

 

Issue:

Unable to contact primary site error: at 30001

Solution:

Check the host name you have entered. Some things to check:

The hostnames are unique

The secondary host name is not a substring of the primary

Do not use the IP address

Issue:

f sr_nameserver TREXNameServer.cpp(10651) : remoteHost does not match with any host of the source site. Please ensure that all hosts of source and target site can resolve all hostnames of both sites correctly.

Solution:

Run the following query and check the host names it returns:

select name from m_topology_tree where path = '/host/'

 

 

Startup of secondary fails

Issue:

Secondary nameserver startup fails after registration of the secondary to the primary: TREXNameServer.cpp(02876) : source site is not active, cannot start secondary site. Please run hdbnsutil -sr_takeover in case of a disaster or start primary site first. -> stopping instance ..


Solution:

Do not use secondary hostnames that are substring of primary hostnames.

 

Issue:

nameserver server:30001 not responding.

collecting information ...

error: source system and target system have overlapping logical

hostnames; each site must have a unique set of logical hostnames.

hdbrename can be used to change names;

failed.

 

Solution:

This is caused by connection timeouts, but if you see it only for a few services, check to see if the landscapes are the same.

 

 

MultiDB issue

Issue:

"unhandled ltt exception: exception 1000003:

Index 1 out of range [0, 0)" when i check the sr_state after running


Solution:

Resolved in 97.01 and 102

 

Takeover

Issue:

i LogReplay RowStoreTransactionCallback.cc(00226) : starting master-slave DTX consistency check

e LogReplay RowStoreTransactionCallback.cc(00264) : Slave volume 3 is not available

 

Solutions:

Resolved in rev 74.04 and 82

 

Work around:

1) Add the following INI parameters as 'false' in indexserver.ini and statisticsserver.ini:

[transaction]

check_slave_on_master_restart = false

check_global_trans_consistency = false

2) Then, restart your system.

 

Issue:

From time to time the takeover process hangs

w Backup BackupMonitor_TransferQueue.cpp(00048) : Master index server not available!

The following trace entries are written to the trace file, and there is a time gap of 30m in the trace: [11596]{-1}[-1/-

i PersistenceManag PersistenceManagerImpl.cpp(02359) : Activating periodic savepoint, frequency 300

e TrexNet Channel.cpp(00362) : active channel 33 from 53223 to 127.0.0.1:30001: reading failed with timeout error; timeout=1800000ms elapsed

 

Solution:

There is no workaround; this issue is fixed in revisions 85.02 and 90.

 

Issue:

If a takeover is performed on a secondary system where not all tenants could be taken over (e.g. because they were not initialized yet), then the takeover flag is not removed from the topology (/topology/datacenters/takeover/*).

 

Solution:

Resolved in HANA 10.1

 

 

Crash on secondary

indexserver crash at DataRecovery::LoggerImpl::IsSecondaryBackupHistoryComplete on the secondary system.

The bug is fixed as of revision 90 so a permanent solution is available via an upgrade.

In the interim the workaround to the issue is the setting of the parameter [system_replication] ensure_backup_history = false within the global.ini file.

The setting of this parameter disables the maintenance of the backup history.  The takeover process is not affected by this parameter but full recovery scenarios after takeover (using old primary data/log backups with new primary log backups) may be impacted.

 

 

SAP Notes

1995412 - Secondary site of System Replication runs out of disk space due to closed data shipping connection

1945676 - Correct usage of hdbnsutil -sr_unregister

2057595 - FAQ: SAP HANA High Availability

2100052 - How to disable parameter mismatch alert for system replication

2050830 - Registering a secondary system via HANA Studio fails with error 'remoteHost does not match with any host of the source site'

2021186 - Garbage collection takes a long time during HANA service restart

2075771 - SAP HANA DB: System Replication - Possible persistence corruption on secondary site

1852017 - Error 10061 when connecting SAP Instances to failed over HANA nodes

2063657 - HANA System Replication takeover decision guideline

2062631 - high availability limitation for SAN storage

2129651 - Indexserver crash caused by inconsistent log position when startup

1681092 - Multiple SAP HANA DBMSs (SIDs) on one SAP HANA system

2033624 -System replication: Secondary system hangs during takeover

2081563 - secondary system's replication mode and replication status changed to "UNKNOWN"

2135107 - Log segment for backup history is still missing after reconnect with log shipping

Creating a Rugby World Cup Sentiment Tracker


With the Rugby World Cup now on, I decided to put some of the SAP kit bag to the test.

The latest output of this *should* be automatically republished daily at 22:00 BST to Lumira Cloud, allowing you to interact with it.

http://tiny.cc/RWCTweets

Rugby Tweet Analysis v2.png

From the 18th to the 23rd of September I have already captured 1.2 million tweets from the #RWC2015 Twitter feed. I hope to keep the data capture running throughout the tournament.

 

In this example I have used:

1. Smart Data Integration (SDI) within SAP HANA to acquire the tweets from Twitter in real time from the #RWC2015 feed

2. SAP HANA to store and process the data

3. Text Analysis to turn Tweets into a structured form

4. Text Mining to identify Relevant Terms

5. SAP HANA Studio to model

6. SAP Lumira Desktop to create some analytics

7. SAP Lumira Cloud to expose the output

 

 

1. Data Acquisition through the SDI Data Provisioning Agent

From HANA SPS 09, Smart Data Integration has been added directly to HANA. One of the data provisioning (DP) sources available is Twitter. I won't repeat the steps to set up the DP agent here, as Bob has created a great series of SAP HANA Academy videos of this setup here.

SAP HANA Academy - Smart Data Integration/Quality : Twitter Replication Pt 1 of 3 [SPS09] - YouTube

 

With the virtual table now available in HANA you can make this real-time by issuing the following SQL.

 

SET SCHEMA HANA_EIM;
--Create SDA Virtual Table
CREATE VIRTUAL TABLE "HANA_EIM"."RWC_R_STATUS" at
"TWITTER"."<NULL>"."<NULL>"."status";
--Create a target table
create COLUMN table "HANA_EIM"."RWC_T_STATUS" like "HANA_EIM"."RWC_R_STATUS";
--Create Subscriptions
create remote subscription "HANA_EIM"."rt_trig1"
as (select * from "HANA_EIM"."RWC_R_STATUS" where "Tweet" like '%#RWC2015%')
target table "HANA_EIM"."RWC_T_STATUS";
--SELECT * FROM "HANA_EIM"."RWC_T_STATUS";
--truncate table "HANA_EIM"."RWC_T_STATUS";
--Queue the subscription and start streaming.
alter remote subscription "HANA_EIM"."rt_trig1" queue;
alter remote subscription "HANA_EIM"."rt_trig1" distribute;
select count(*) from "HANA_EIM"."RWC_T_STATUS";
--Stop Subscription
--ALTER REMOTE SUBSCRIPTION "rt_trig1" RESET;

 

With the data now being acquired "automatically" it's possible to monitor the acquisition via the XS Monitoring URL http://ukhana.mo.sap.corp:8000/sap/hana/im/dp/monitor/?view=DPSubscriptionMonitor

DPSubscriptionMonitor.png

3. Text Analysis

As I previously described in Using Custom Dictionaries with Text Analysis in HANA SPS9, for Formula One Twitter Analysis, creating custom dictionaries for your subject area is very easy.

I've added one to include the rugby teams, their Twitter handles and short names. This new dictionary was included in a new configuration.

HANA Web IDE.png

To turn on Text Analysis on the acquired twitter data, use the following syntax

CREATE FULLTEXT INDEX "RWC-TWEETS" ON "HANA_EIM"."RWC_T_STATUS"("Tweet")
CONFIGURATION 'RWC::RUGBY_SOCIAL_CONFIG'
FAST PREPROCESS OFF
LANGUAGE COLUMN "isoLanguageCode"
LANGUAGE DETECTION ('EN','FR','DE','ES','ZH','IT')
TEXT ANALYSIS ON
TEXT MINING ON
FUZZY SEARCH INDEX ON

 

Text Analysis is really clever and identifies some useful elements beyond the basics: who, where, when, etc. The more advanced output is often known as fact extraction; of these "facts", Sentiment, Emotion and Requests are three that could potentially be useful in the rugby tweet data.
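A quick way to see which entity and fact types actually show up in the captured tweets is to aggregate the $TA table that the full-text index generates. A hedged sketch (the table name follows the usual $TA_<index name> convention and TA_TYPE is the standard text-analysis output column):

-- Count extracted entities/facts by type from the generated $TA table
SELECT "TA_TYPE", COUNT(*) AS "OCCURRENCES"
FROM "HANA_EIM"."$TA_RWC-TWEETS"
GROUP BY "TA_TYPE"
ORDER BY "OCCURRENCES" DESC;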

 

4. Text Mining the Tweets

Now I wanted to try something more than just sentiment, mentions and emotion. For this I decided to use Text Mining, which is also built into HANA and has been further enhanced in SPS 10 with SQL access to the Text Mining functions. Activating Text Mining is very easy; it's done when specifying the FULLTEXT index, by using TEXT MINING ON as in the syntax above.

 

Text Mining has multiple capabilities which are applicable at a document level; for this I treated each tweet as a document, which served a purpose. As tweets by nature are very short, you don't gain that much additional insight from the document-level analysis.

 

SELECT *
FROM TM_GET_RELEVANT_TERMS (
DOCUMENT IN FULLTEXT INDEX WHERE "Tweet" like '%England%'
SEARCH "Tweet" FROM "HANA_EIM"."RWC_T_STATUS"
RETURN
TOP 16
) AS T

 

After investigating the Text Mining functions TM_GET_RELEVANT_TERMS and TM_GET_RELATED_TERMS with Twitter data I found the core Text Analysis functions to be more than capable for my analysis purposes. If however I was analyzing news reports, blogs or documents then Text Mining would be much more appropriate

Text Mining Output.png

 

5. HANA Modelling

This piece took the longest and was fairly challenging, as you need to model the tweets with the final output in mind. This turns the structured $TA table into a format suitable for analysis in Lumira (or another BI tool) by identifying the entities and the relationships: countries, tweets, sentiment.

 

I created 2 Calculation Views in HANA Studio, they are still a work in progress, but are sufficient to give some useful output.

I felt it was easier to create two, as they are at different levels of granularity: one at the Country level, the other at Country and Key Word.

Text_Analysis_Calc_View_Annotated.png

Text_Analysis_Words_CV_Annotated.png

6. SAP Lumira Desktop to create some visualisations

With the modelling and manipulation taken care of in HANA, using Lumira is then easy (although you can spend some time perfecting your final output).  Here we can build some visualisations as below and then encapsulate them into a story board.

Screen Shot 2015-09-23 at 10.34.32.png

My original visualisations have now been greatly enhanced by Daniel Davis into a great Lumira Story.

Daniel has also created an England Rugby wall chart, available for download here: http://www.thedavisgang.com/

Screen Shot 2015-09-23 at 10.46.32.png

7. SAP Lumira Cloud

To share the output in an interactive way we can publish the visualisations, stories and dataset to SAP Lumira Cloud. There's one crucial story option, "Refresh page on open", that is required to update the visualisations within the story and which is OFF by default. Set this to ON and the story also gets updated.

 

Lumira Desktop has a scheduling agent built in, once enabled it can automatically refresh and republish to Lumira Cloud.

I have set this to refresh the Rugby Tweet Analysis every day at 22:00

 

Within Lumira Cloud we now need to make the story public; this is set under the Story options.

Lumira Cloud Share.png

Change Access.png

Public.png

We now have the URL which can be shared with others, for ease of consumption I created a Short URL pointing to this long URL with http://tiny.cc/

 

To View the full interactive Lumira Story Board please use the link below

http://tiny.cc/RWCTweets


SAP Total Margin Management - Understanding the Past to Model the Future


At SAPPHIRE this year, you may have seen ConAgra win a HANA Innovation award for the work they have done with SAP on a new solution called SAP Total Margin Management based on SAP HANA.

 

ConAgra Foods, Inc. is one of North America's largest packaged food companies, with branded and private-branded food found in 99 percent of America's households, as well as a strong commercial foods business serving restaurants and foodservice operations globally. Within this industry, increasing competitive pressure requires more accurate forecasts of future costs in order to maximize margin.

 

The company partnered with SAP to co-innovate on a margin management solution that provided visibility to costs at the lowest level of granularity, and the forecasting capabilities to model scenarios to better predict the future.

 

There are two key components to SAP Total Margin Management:

 

The first provides a better ability to understand the past by:

  • Decomposing complex Bills of Materials
  • Creating models based on history
  • Breaking those models down by customer / product combinations
  • Allowing these models and drivers to be used for forecasting and scenario analysis
  • Breaking everything down to a base level where you are able to compare like with like

 

The second key component is the ability to efficiently model the future by:

  • Converting drivers to levers
  • Taking inventory position into consideration when projecting the future
  • Having the information available at the level of detail necessary to provide "margin flow analysis", which is understanding variances based on price, product mix and volume.

 

A new video showcasing the power of the solution is now available here.

 

Good cost management identifies items where increased cost levels must be understood, addressed or taken into account in pricing and operational planning, allowing you to respond speedily to changing market conditions. SAP Total Margin Management helps you to understand the Profit and Loss Statement at any and every level of the business - by customer, by product, by brand, or by area of responsibility.

 

SAP HANA is the high-speed in-memory platform needed to process and visualize this amount of information. Users can quickly perform iterative scenarios and "What If" forecasting on large amounts of data.

 

SAP Total Margin Management is generally available as of May 2015 and is an excellent example of how SAP is working directly with customers to solve real business problems and help their businesses RUN SIMPLE.

DYNAMISM OF FUZZY SEARCH IN SAP HANA


Hello Experts,


This blog is about one of the features that SAP HANA provides: FUZZY SEARCH.


Now the question arises, what is Fuzzy search?!... So, Fuzzy search is the technique of finding strings that match a pattern approximately (rather than exactly). It is a type of search that will find matches even when users misspell words or enter only partial words for the search. It is also known as approximate string matching.


According to Fuzzy Search Reference guide, Fuzzy Search is a fast and fault-tolerant search feature for SAP HANA. The term “fault-tolerant search” means that a database query returns records even if the search term (the user input) contains additional or missing characters, or other types of spelling error.


Fuzzy search can be used in various applications, like:

  • Fault-tolerant check for Misspelled words and typos
  • Fault-tolerant search in text columns
  • Fault-tolerant search in structured database content
  • Fault-tolerant check for duplicate records

 

The best real-world example of such a fault-tolerant search is when you type "The United States of Amerika" into Google Search: it automatically displays results for "The United States of America".

 

In SAP HANA, Fuzzy Search can be called by using the CONTAINS() predicate with the FUZZY() option in the WHERE clause of a SELECT statement.

 

The basic SYNTAX is:

       

SELECT * FROM <tablename> WHERE CONTAINS (<column_name>, <search_string>, FUZZY (x))


Where x is an argument that defines the fuzzy threshold. It ranges from 0.0 to 1.0 and defines the level of error tolerance for the search. A search with FUZZY(x) returns all values that have a fuzzy score greater than or equal to x.

 

Fuzzy search can only be applied to:

  • Column tables
  • Attribute views
  • SQL views (created with the CREATE VIEW statement) and, in some cases, joins of multiple tables and views

having columns of one of the following types:

    • String (VARCHAR, NVARCHAR)
    • Text (TEXT, SHORTTEXT, FULLTEXT INDEX)
    • DATE

 

The CONTAINS() predicate can be used in the WHERE clause of a SELECT statement. It performs:

  1. A free style search on multiple columns
  2. A full-text search on one column containing large documents
  3. A search on one database column containing structured data

 

The type of search it performs depends on its arguments.
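For instance, a freestyle search over several columns might look like the following (the EMPLOYEES table and its FIRSTNAME/LASTNAME columns are purely a hypothetical illustration, not the demo table used later in this blog):

SELECT TO_DECIMAL(SCORE(),3,2) AS score, *
FROM EMPLOYEES
WHERE CONTAINS((FIRSTNAME, LASTNAME), 'Pragati Gupta', FUZZY(0.8))
ORDER BY score DESC;

Passing a single TEXT column instead turns this into a full-text search on that column, while passing a single structured column (e.g. an NVARCHAR field) turns it into a search on structured data.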

 

 

The SCORE() Function

 

The fuzzy search algorithm calculates a fuzzy score for each comparison; the SCORE() function can be used to retrieve that score. It is a numeric value between 0.0 and 1.0.

 

The score defines the similarity between the user input and the records returned by the search. A score of 1.0 means the strings are identical. A score of 0.0 means that there is no similarity. The higher the score, the more similar a record is to the search input.

 

We can request the score in the SELECT statement by using the SCORE() function. You can sort the results of a query by score in descending order to get the best records first (the best record is the record that is most similar to the user input). When more than one CONTAINS() is given in the WHERE clause, or multiple columns are searched in one CONTAINS(), the score is calculated as a weighted average of the scores of all columns.
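As a sketch of how per-column weights influence that average, the HANA search syntax also allows an explicit WEIGHT() specifier per searched column (the table, columns and weight values below are hypothetical; please verify the exact behavior against the reference guide for your revision):

SELECT TO_DECIMAL(SCORE(),3,2) AS score, *
FROM EMPLOYEES
WHERE CONTAINS((FIRSTNAME, LASTNAME), 'Pragati', FUZZY(0.8), WEIGHT(0.7, 0.3))
ORDER BY score DESC;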

 

For example, consider a column table with two fields (ID integer, TXT text) containing different variations of the words 'hello world'. A fuzzy search for the word 'hello', together with the score, will then return the following:

 

SELECT TO_DECIMAL(SCORE(),3,2) AS score, * FROM <table_name> WHERE CONTAINS(txt, 'Hello', FUZZY(0.8))
ORDER BY score DESC;

1.png

Here, the words 'hello' and 'Hello' have a score of 1, since the strings match completely, whereas the word 'ello' has the lowest score.


We can specify additional search options that change the default behavior of the fuzzy search as an additional string parameter for the FUZZY() function.

 

There are many possible combinations of search options available. Let's try out the combination of FUZZY() with similarCalculationMode.

 

Step 1. Create a column table as follows:

 

create column table <table_name>(
ID integer,
TXT varchar(20));


Step 2. Run the following commands to insert values into the table:


insert into <table_name> values(1,'hello');
insert into <table_name> values(3,'hell');
insert into <table_name> values(4,'hel');
insert into <table_name> values(5,'ello');
insert into <table_name> values(7,'hello world');
insert into <table_name> values(8,'hell world');
insert into <table_name> values(14,'helloworld');
insert into <table_name> values(15,'hellworld');
insert into <table_name> values(16,'HelloWorld');
insert into <table_name> values(17,'HELLO');
insert into <table_name> values(21,'world');
insert into <table_name> values(22,'word');


Step 3. Perform a string search with the option similarCalculationMode


SELECT TO_DECIMAL(SCORE(),3,2) AS score, * FROM <table_name>
WHERE CONTAINS(txt, 'Hello', FUZZY(0.8,'similarCalculationMode=compare'))
ORDER BY score DESC;

    

     We will get the output as:

     2.png

     Here, FUZZY() compares all the strings in the table with the search string and gives the best-matching results having SCORE() greater than or equal to 0.8.

 

SELECT TO_DECIMAL(SCORE(),3,2) AS score, * FROM <table_name>
WHERE CONTAINS(txt, 'Hello', FUZZY(0.8,'similarCalculationMode=search'))
ORDER BY score DESC;

    

     We will get the output as:

     3.png

     Here, FUZZY() searches all the strings in the table with the search string and gives the best-matching results having SCORE() greater than or equal to 0.8. Notice the difference between search and compare here: the result also includes the strings composed of two words.

 

SELECT TO_DECIMAL(SCORE(),3,2) AS score, * FROM <table_name>
WHERE CONTAINS(txt, 'Hello', FUZZY(0.8,'similarCalculationMode=substringsearch'))
ORDER BY score DESC;

    

     We will get the output as:

     4.png

     Here, FUZZY() searches all the strings in the table that contain the search string 'hello' as a substring and gives the best-matching results having SCORE() greater than or equal to 0.8.


Similarly, we can try the other available properties of fuzzy search mentioned in the reference guide, and we can combine these properties to get the best possible result for the requirement; one such combination is sketched below.
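For example, a sketch that combines similarCalculationMode with the spellCheckFactor option could look like this (the option names and whether they may be combined should be verified against the Fuzzy Search reference guide for your HANA revision):

SELECT TO_DECIMAL(SCORE(),3,2) AS score, *
FROM <table_name>
WHERE CONTAINS(txt, 'Hello', FUZZY(0.7, 'similarCalculationMode=substringsearch,spellCheckFactor=0.9'))
ORDER BY score DESC;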

 

I hope you liked my first blog.

Happy Learning!

 

Thanks,

Pragati Gupta

Securing the Communication between SAP HANA Studio and SAP HANA Server through SSL


Hello Everyone,

 

This blog shows you how to secure the communication between the HANA server and HANA Studio through SSL. It is highly recommended when a lot of sensitive data is handled in the system and you want to protect it from man-in-the-middle attacks. There are multiple documents available on SCN on this topic, but here I want to share my experience of setting this up in a short time.

 

Pre-requisites:

  • HANA Server is installed and running
  • HANA studio is installed in the local system
  • Access to the HANA server
  • Putty / WinSCP tools

 

HANA Server and client without SSL configured:

1.JPG

2.JPG

 

Steps need to be performed in HANA Server:


Log in to the HANA server using PuTTY as the root user and check whether the libssl.so file exists. If not, create a symbolic link to libssl.so.0.9.8.

 

3.jpg

 

Now log in to the HANA server as the "<sid>adm" user.

 

4.jpg

 

Create the Root Certificate:


  1. Go to the home directory "/usr/sap/<sid>/home"
  2. Create a directory named ".ssl"
  3. Change into the ".ssl" directory

5.JPG

   4.  Execute the following command

openssl req -new -x509 -newkey rsa:2048 -days 3650 -sha1 -keyout CA_Key.pem -out CA_Cert.pem -extensions v3_ca

6.JPG

   5.   Enter the relevant details

7.jpg

   6.   This will create a couple of files (CA_Cert.pem and CA_Key.pem) in the ".ssl" directory

8.JPG

 

Create the Server Certificate:


  1. Get into “.ssl” directory
  2. Execute the following command and Enter the relevant details

openssl req -newkey rsa:2048 -days 365 -sha1 -keyout Server_Key.pem -out Server_Req.pem -nodes

9.JPG

10.jpg

 

   3.   This will create a couple of additional files (Server_Key.pem and Server_Req.pem) in “.ssl” directory

   4.   At this time, you will have 4 .pem files under “.ssl” directory

 

11.JPG

 

Sign the Server Certificate:


  1. Get into “.ssl” directory
  2. Execute the following command and Enter the relevant details

openssl x509 -req -days 365 -in Server_Req.pem -sha1 -extfile /etc/ssl/openssl.cnf -extensions usr_cert -CA CA_Cert.pem -CAkey CA_Key.pem -CAcreateserial -out Server_Cert.pem

12.JPG

   3.   At this time, you will additionally have one new .pem file(Server_Cert.pem) and one new .srl file(CA_Cert.srl) created under “.ssl” directory as shown above

 

Chain the Certificate:


  1. Get into “.ssl” directory
  2. Execute the following command

cat Server_Cert.pem Server_Key.pem CA_Cert.pem > key.pem

   3.   At this time, you will additionally have one new .pem file (key.pem) created under the ".ssl" directory. In total there will be 7 files under this directory

13.JPG


Copy the Certificate:


  1. Get into “.ssl” directory
  2. Execute the following command

cp CA_Cert.pem trust.pem

   3.   This will create one new trust.pem file, as you just did a copy

 

14.JPG


Restart HANA Server:


  1. Go to /usr/sap/<sid>/HDB<InstNo>
  2. Stop the HANA Server using ./HDB stop and then start the HANA server using ./HDB start

15.JPG

 

Steps need to be performed in HANA Studio:


Copy “trust.pem” to local client:


Using WinSCP Tool copy the trust.pem from “.ssl” directory to c:\temp\

16.jpg

Import “trust.pem”:

 

  1. As user ‘Administrator’, or with administrative access, import trust.pem into Java’s keystore. This can be done as below
  2. Copy the Java bin directory location from HANA Studio

17.jpg

   3.   Run the Command prompt (with Run As Administrator), and go to Java bin directory location copied above

18.JPG

 

   4.   Execute the command keytool.exe -importcert -keystore "C:\Program Files\SAP\hdbstudio_Rev93\plugins\com.sap.ide.sapjvm.jre.win32.x86_64_81.0.0\jre\lib\security\cacerts" -alias HANServer -file c:\temp\trust.pem

19.JPG

   5.   Enter the keystore password; the default password for the Java keystore is "changeit". Once the password is entered, the certificate details will be shown. Enter "yes" to trust the certificate

20.JPG

   6.   Now the Certificate would be added to the keystore

21.JPG

 

Enable SSL Communication:


  1. Close HANA Studio(if it’s opened already)
  2. Open the HANA Studio and go to Administrator’s perspective, right click and add the HANA system (MK2 in our case)
  3. Enable “Connect using SSL”, in the Connection Properties dialog and click Finish

22.JPG

   4.   Now hover over the added HANA (MK2) system; you will observe a small lock on the system along with an SSL indication in the tooltip, as shown below

23.jpg

 

Now the SSL has been configured between HANA Server and HANA Studio and the communication is secured.

 

Hope this helps.

 

Rgds,

Murali


SAP HANA for Enterprise Architects – SAP HANA and Data Center Readiness


The Journey Continues - Episode 7 of 10

 

Is SAP HANA ready for your data center? How are you going to architect it so that it fits in? These are questions for Enterprise Architects that were answered in this installment of the webcast series SAP HANA for Enterprise Architects. The webcast speaker was Ralf Czekalla, Product Manager at SAP, who delivered a very detailed presentation covering just some of the data center readiness story.

 

WARNING: This is a deep dive presentation so you may need your acronym dictionary as we cover topics like HA and DR, RPO/RTO, VM, TDI etc.

 

In past webcasts we have had speakers talk about SAP HANA Cloud Platform and HANA Cloud Integration. This presentation was mainly about running SAP HANA on premises. I know there are many customers out there with SAP HANA appliances that are three years old or older. What new options are out there for backup and recovery? There are many more options for you to consider when it comes time to renew the hardware that powers your SAP HANA implementation. SAP has come a long way!

 

** As a side note, this webinar comes with one of the most extensive slide decks I have ever seen. You will definitely want to download the presentation materials, as we did not have nearly enough time to cover the depth of material.

 

In approaching the topic of Data Center Readiness, Ralf divided it up into the following areas:

Weblog7pic2.png

 

All images © 2015 SAP SE or an SAP affiliate company. All rights reserved. Used with permission of the author.

 

I will attempt to highlight some of the things that I found interesting in the presentation.

Throughout the webcast, Ralf spoke to some of the historical points in the development of SAP HANA as well as the roadmap and options going forward. You will see copious references to SAP Technical Notes throughout the presentation to give you links to the source documents for reference. This is a point-in-time presentation and many of the slides focus on the SPS 10 release of SAP HANA.

 

As groundwork for the presentation, Ralf covered some of the existing deployment methods of SAP HANA. This set the stage for talking about many of the aspects of design and setup like multi-tenancy, performance criteria and single instance vs. scale out architectures.

 

From the webcast, I think one of the underutilized solutions for SAP HANA is Tailored Data Center Integration (TDI). Many organizations have bought into high-end converged infrastructure and now find that it is underutilized. Running SAP HANA on your existing data center hardware is a way of cutting costs and using what you have. Your hardware still needs to meet the specifications that SAP sets, but in these challenging economic times, it is another option.

Weblog7pic3.png

 

An early-perceived weakness of SAP HANA was the backup and recovery options that were available at launch. It was great to see that the internal capabilities have been updated as well as support for many vendor tools to handle this task. There are many options that Enterprise Architects can include that take into account their existing backup tools or environment.

 

One of the many gems in the presentation was the discussion around virtualization using VMware. Ralf had a great slide that spoke to the pros and cons of using VMware vs. bare metal.

WEblog7pic4.png

 

From the slide we see that there are performance impacts across different SAP HANA tasks. You really need to know what your application is doing in SAP HANA to determine if VMware is the right fit.

 

Further into the presentation it was obvious that there was not enough time to cover all of the agenda and so details on topics like Monitoring & Administration and Security & Auditing were left for a future webcast.


Ralf included lots of links to external content that you should check out in the slide deck. Here is one on backups and recovery:

Weblog7pic6.png

 

I think some of the highlights that I took away from the presentation were that SAP HANA has matured over the last few years. Initially it was very weak in deployment, disaster recovery and backup options. Now there are many different solution possibilities based on your performance, availability and management needs.

 

This webcast was a whirlwind tour of SAP HANA in the data center. The speaker discussed many different aspects that you need to consider as you build out solutions. Review the slide deck for more information; the content is all there.

 

To view the webcast:

http://event.on24.com/wcc/r/1019533/CC631232C76CC955D8A2F540AF299AF5

 

The PDF file of the presentation with over 190 slides is found here: https://www.asug.com/discussions/docs/DOC-42296

 

A few of the webcast attendee key takeaway comments:

  • zero-downtime maintenance - very impressive!
  • DR, Backup and Recovery, High availability
  • TDI enabling External/Corp Storage.
  • Key considerations for HA, DR and backups.
  • SAP is continuously updating their strategy and product capability

 

In the next webcast scheduled for September 29th, the speaker will be covering “Why SAP HANA, Why Now and How?”

 

Complete Webcast Series Details https://www.asug.com/hana-for-ea

 

All webcasts occur at 12:00 p.m. - 1:00 p.m. ET on the days below. Click on the links to see the abstracts and register for each individual webcast.

September 29, 2015: Why SAP HANA, Why Now and How?

October 6, 2015: Implications of Introducing SAP HANA Into Your Environment

October 13, 2015: Internet of Things and SAP HANA for Business


Why Size Matters and why it really Matters for HANA (Part II)


Introduction

Part I: Why Size matters, and why it really matters for SoH (Part I)

In the first part of this blog I described some reasons why you want to try and keep your database small.

Apart from cost there are also some compelling technical reasons why you will eventually come to a hard stop in terms of database growth, i.e. the limit of technology today.
The biggest x86 systems on the market today (and certified) are currently 16 socket 12 TB nodes, with one vendor offering
a potential 32 socket 24TB node.

 

With future x86 advancements (Broadwell and beyond) SAP may release higher socket/memory ratios (i.e. use of 64GB DIMMs), but for the time being we are limited to:

  • 2-socket: 1.5 TB
  • 4-socket: 3 TB
  • 8-socket: 6 TB
  • 16-socket: 12 TB

 

 

Take a look at what you store in your Database

 

When you look at your existing Business Suite database, what are your high growth areas?

 

  • Do you store attachments in your DB, i.e. PDFs/jpgs/Word docs/Engineering drawings?
  • Do you use workflow heavily?
  • Do you rely on application logs?
  • Do you keep your processed IDOCs in the DB?
  • Do you generate temporary data and store it?
  • Do you keep all the technical logs/data that SAP produces on a constant basis?

 

 

The above does not even look at business relevant data or retention policies.

 

You will be surprised at how much data is stored in your DB that your users never use, or use only very infrequently.

Does this data really belong in your Business critical application that should be running at peak performance all of the time?

 

Probably not but there are valid (and invalid) reasons why it is stored in there.

 

Attachments

 

Lets take attachments as an example.

 

Think back to when your SAP system was first implemented. There were probably budget and time constraints, made all the worse because of project overruns.
That great design your solution/technical architect came up with, using an external content/document server that required a separate disk array, server and document server license, was likely torn up as SAP provided a local table just for the purpose of storing attachments.

 

The architect lost the argument and the data was stored locally (Yes I have been there).

 

This scenario actually has two consequences.

a) Resulting in a large database (I have seen the relevant tables grow to above 2TB)

b) Slow performance for the end user, as you have to access the database, load the relevant object into DB memory, then into the application Server Memory
    before it is shown to the user.

 

With a remote document store, the user is passed a url pointing directly to the relevant object in the document store, bypassing the application servers and DB server at the same time reducing the load on the server network.

 

 

Workflow

 

Once a workflow is completed, does it really need to sit in the user's inbox? Yes, in some industries, e.g. aerospace, I can imagine you need to keep a record of all workflows, but do they need to be stored online? Would it not be more secure to store them in a write-once/read-many archive where the records cannot be edited?

Again, I have seen workflow tables so large (and over-engineered) that the SAP archiving process can't even keep up with the rate of creation.

 

Application Logs

 

Again, how long do you need to keep these? Is there a compliance reason? And what is the maximum amount of time these logs are actually relevant?

 

IDOCS and other transient objects.

 

Once an IDOC is successfully processed, the data will already have been loaded into the relevant tables. After that, the data in the IDOC tables is in most cases pretty much irrelevant; if you have to keep the IDOC data, store the incoming/outgoing IDOC/XML file instead. Large IDOC tables can cause significant performance issues for interfaces.

Is there any other temporary data you create that is truly transient in nature? Consider if you really need to keep it.

 

Logs

SAP can create various logs in the database at an alarming rate. I've seen DBTABLOG (log of all customizing changes) at 1TB, SE16N_CD_DATA (a log of data deleted via SE16) at 100GB (what are you doing deleting data via SE16 anyway?!?!?!)

 

Business Data Retention Periods

 

This is the hardest nut to crack. As stated in Part I, disk is cheap. Getting the business to agree on retention periods was nigh on impossible and a battle the poor suffering OPS guys/gals would retreat from.

With In-Memory databases this is a battle line that will need to be redrawn. As stated in the introduction, there are technical limits as to how far your database can grow without suffering severe performance degradation or costs will increase an order of magnitude more than they did with disk based technologies.

 

Hard questions have to be asked.

 

  • Why do you have to keep the data online?
  • At what point does your data become inactive?
  • Once inactive will you need to change it?
  • Is the reason for Legal/Compliance reasons or just because somebody said they want all data online?
  • If this inactive data is only going to be used for analysis, would it not be better storing it elsewhere in a summarized form? (this is one of the reasons why BW will not die for a while)

 

One area where users complain about archiving is that they have to use a different transaction to get at archived data. You may have a counter-argument now:
with the journey to SAP HANA you may well be considering Fiori, a complete change in user interface, so the user has to re-train anyway and it becomes a moot point.

 

Summary

 

I realize I have not talked much about HANA in this part. Old hats like me would have heard the above again and again in regards to traditional databases. We have often lost the argument or maybe even just thrown disk at the problem rather than getting into the argument in the first place.

 

With in-memory databases, a jump from one particular CPU/memory configuration to another can be a doubling in price, rather than the linear increase seen with disk-based databases.

 

If your In Memory database is so big that it reaches the limits of current technologies, you may be in big trouble. An emergency archiving project is always nasty. It will be political. Your system can crawl as you frantically use all available resources to offload data, and the end-users will complain about new transactions they have to use as the change will be forced upon them.

Playing with Images – BLOB data in SAP HANA !!


Hello Everyone,

 

In this blog let us see how we can bind dynamic images (i.e. based on user input) to SAPUI5 Image control.Let’s take an example of storing images of  100 employees and then displaying it as their profile pic based on employee id.

 

Firstly you need to process the cool images and store it in HANA!! Now how do we do that?? There are many ways to do this e.g. using  python,java,etc but I choose the JAVA way to store it as BLOB in HANA Table.. BLOB datatype can store images/audio/video up to 2GB.
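Before running the code, the target table needs to exist in HANA. A minimal definition consistent with the INSERT and SELECT statements used below could look like this (the column names are inferred from the code; the primary key is an assumption):

CREATE COLUMN TABLE "AVIR11"."EMP_IMAGES"(
    ID INTEGER PRIMARY KEY,  -- employee id, taken from the image file name
    IMAGE BLOB               -- the employee picture itself
);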

 

Below is the code snippet for opening an image file, processing it and storing it in HANA table. Place all your image files in a folder(eg. C:\\Pictures).


import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ImageOnHana {

    public static final String hanaURL = "jdbc:sap://<hostname>:3<instance>15/";
    public static final String hanaUser = "AVIR11";
    public static final String hanaPassword = "ABCD1234";
    public static final String pics = "C:\\Pictures";

    public static void main(String[] args) throws IOException, SQLException, ClassNotFoundException {
        Class.forName("com.sap.db.jdbc.Driver");
        // Open HDB connection
        Connection conn = DriverManager.getConnection(hanaURL, hanaUser, hanaPassword);
        conn.setAutoCommit(false);
        String query = "INSERT INTO \"AVIR11\".\"EMP_IMAGES\" VALUES(?,?)";
        PreparedStatement pstmt = conn.prepareStatement(query);
        File folder = new File(pics);
        File[] images = folder.listFiles();
        System.out.println("*****OPEN FILES NOW****");
        try {
            if (images != null) {
                for (File image : images) {
                    String imgName = image.getName();
                    FileInputStream fis = new FileInputStream(image);
                    pstmt = conn.prepareStatement(query);
                    // The file name (e.g. 1.JPG) carries the employee id
                    String[] parts = imgName.toUpperCase().split(".JPG");
                    String id = parts[0];
                    pstmt.setInt(1, Integer.parseInt(id));
                    pstmt.setBinaryStream(2, fis, (int) image.length());
                    pstmt.executeUpdate();
                    conn.commit();
                    System.out.println(imgName + " image upload to HANA successful");
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Row inserted  - “AVIR11”.”EMP_IMAGES”.Column IMAGE with BLOB datatype

Blog_pic.jpg

For providing this image to the UI lets create a XSJS service that would process the blob data from table. Make sure that the content-type is set to image/jpg.

 

var empId = $.request.parameters.get("empId");
var conn = $.db.getConnection();
try {
    var query = "SELECT IMAGE FROM \"AVIR11\".\"EMP_IMAGES\" WHERE ID = ?";
    var pstmt = conn.prepareStatement(query);
    pstmt.setInteger(1, parseInt(empId));
    var rs = pstmt.executeQuery();
    if (rs.next()) {
        // Return the stored BLOB as a JPEG image
        $.response.headers.set("Content-Disposition", "attachment; filename=image.jpg");
        $.response.contentType = 'image/jpg';
        $.response.setBody(rs.getBlob(1));
    }
} catch (e) {
    // Errors are swallowed; the response stays empty
}

conn.close();

Note : Odata does not support BLOB datatype, hence couldn't send the response in Odata.

 

Done!! We are good to go and integrate this service to the UI5 image control !!

<Image src="http://<hostname>:8000/avinash/services/XJ_Emp_Images.xsjs?empId=1"       width="100%" height="150px">      <layoutData><l:GridData span=”” linebreakL=””/></layoutData>            </Image>

The above view.xml snippet shows a hardcoded/specific employee ID. For a dynamic employee ID, set <Image id="image"> and refer to this id in your controller for setting the source.

 

byId("image").setSrc("http://<hostname>:8000/avinash/services/XJ_Emp_Images.xsjs?empId="+employeeId+"");

Blog_pic2.jpg

Voilà my fav star pic for my Employee Id !!

 

If your scenario is to upload a file from the UI using an upload button, you can use the SAPUI5 FileUploader control and use XSJS to receive the uploaded content. The later processing and UI image binding remain the same as above.


Happy Learning !!


Avinash Raju

SAP HANA Consultant

[System Replication] end-to-end Client Reconnect


I've seen many posts on how to set up HANA System Replication and its takeover; however, few of those posts cover client reconnect after sr_takeover.

 

In order to ensure the client is able to seamlessly find the active HDB node (whether primary or secondary), we can use either IP redirection or DNS redirection. In this blog, I'll focus on simple IP redirection as it is much easier, faster and has fewer dependencies compared to DNS redirection.

 

For detailed info on IP and DNS redirection, please refer to these guides:

 

http://scn.sap.com/docs/DOC-63221

Introduction to High Availability for SAP HANA

How to Perform System Replication for SAP HANA

 

 

First of all, we need to identify a virtual hostname/IP and create it in your DNS. Below are the sample virtual hostname/IP and physical hostname/IP used:

 

Virtual IP/Hostname: [10.X.X.50 / hanatest]

Primary Physical IP/Hostname: 10.X.X.20 / primary1

Secondary Physical IP/Hostname: 10.X.X.21 / secondary2

 

In normal operation, [10.X.X.50 / hanatest] is bound to the primary physical host - primary1.

SAP instances, HTTP, BO, SAP DS, etc. connect to HDB via [10.X.X.50 / hanatest].

During any unplanned outage/disaster, [10.X.X.50 / hanatest] will be unbound from the primary host and bound to the secondary physical host.

 

And below are the steps for binding the virtual IP [10.X.X.50] to the network interface in Linux:

 

1) Bind virtual ip (10.X.X.50) to Primary Physical Host

 

primary1:/etc/init.d # ifconfig eth0:0 10.X.X.50 netmask 255.255.255.0 broadcast 10.X.X.255 up


2) Check eth0 entry:

 

primary1:~ # ifconfig

eth0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:C2

          inet addr:10.XX.XX.21  Bcast:10.XX.XX.255  Mask:255.255.255.0

         

eth0:0 Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:C2

inet addr:10.XX.XX.50  Bcast:10.XX.XX.255  Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1

 

3) Ping hanatest and it is resolvable.

 

PING hanatest (10.XX.XX.50) 56(84) bytes of data.

64 bytes from hanatest (10.XX.XX.50): icmp_seq=1 ttl=64 time=0.028 ms

64 bytes from hanatest (10.XX.XX.50): icmp_seq=2 ttl=64 time=0.038 ms

64 bytes from hanatest (10.XX.XX.50): icmp_seq=3 ttl=64 time=0.024 ms

 

4) For all HDB clients, connect using the virtual hostname [hanatest]

 

a) SAP - hdbuserstore:

 

sidadm 52> hdbuserstore list

DATA FILE       : /home/sidadm/.hdb/XX/SSFS_HDB.DAT

 

KEY DEFAULT

  ENV : hanatest:30515

  USER: SAPSID

 

Login to SAP, and you'll see DBHOST is pointed to primary1

 

In DBACOCKPIT -> DB CONNECTION -> Ensure virtual host is used:

 

b) HANA Studio:

Services are running on the physical host primary1

 

c) ODBC - connect using the virtual host

 

 

d) HTTP - xsengine

http://hanatest:8005/

 

 

http://hanatest:8005/sap/hana/xs/admin

 

 

------------------Unplanned outage *DISASTER*:--------------------------------------------

 

During Disaster. we will:

 

i) Ensure the primary HDB is down and not accessible, to avoid any split-brain

ii) Unbind the virtual IP [10.X.X.50] currently bound to the primary physical host. In ifconfig, eth0:0 should not be visible after you execute the command below.

 

primary1:~ # ifconfig eth0:0 10.XX.XX.50 down


iii) clear ARP cache in client [optional]

 

iv) initiate -sr_takeover and wait for HDB on the secondary to be up and ready

 

v) Once HDB on secondary host is up and running, bind virtual ip [10.X.X.50] to Secondary Physical Host


secondary2:/etc/init.d # ifconfig eth0:0 10.XX.XX.50 netmask 255.255.255.0 broadcast 10.160.69.255 up


secondary2:~ # ifconfig

eth0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:C8

          inet addr:10.XX.XX.21  Bcast:10.XX.XX.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500 Metric:1

       

eth0:0 Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:C8

inet addr:10.XX.XX.50  Bcast:10.XX.XX.255  Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500  Metric:1

 

vi) Ping hanatest and it is resolvable. The virtual host [hanatest] is now bound to the secondary physical host - secondary2

 

PING hanatest (10.XX.XX.50) 56(84) bytes of data.

64 bytes from hanatest (10.XX.XX.50): icmp_seq=1 ttl=64 time=0.028 ms

64 bytes from hanatest (10.XX.XX.50): icmp_seq=2 ttl=64 time=0.038 ms

64 bytes from hanatest (10.XX.XX.50): icmp_seq=3 ttl=64 time=0.024 ms

 

----------------------- End-to-End Client Reconnect Verification -----------------------------

Once done, you can perform end-to-end client reconnect verification without the need to perform any changes.

 

a) SAP Instances after sr_takeover and running on secondary host:

 

a.i) Developer trace – SAP reconnect ok to secondary host:secondary2

B Connection 1 opened (DBSL handle 1)

B successfully reconnected to connection 1

B ***LOG BYY=> work process left reconnect status [dblink       2158]

M ThHdlReconnect: reconnect o.k.

M

M Tue Sep 29 13:50:13 2015

M ThSick: rdisp/system_needs_spool = false

C FDA DB protocol version from connection 0 = 1

 

B Tue Sep 29 13:55:16 2015

B Connect to XXX as system with hanatest:30515

C Try to connect as system/<pwd>@hanatest:30515 on connection 1 ...

C

C Tue Sep 29 13:55:17 2015

C Attach to HDB : 1.00.095.00.1429086950 (fa/newdb100_rel)

C fa/newdb100_rel : build_weekstone=0000.00.0

C fa/newdb100_rel : build_time=2015-04-15 10:44:35

C Database release is HDB 1.00.095.00.1429086950

C INFO : Database 'TST/05' instance is running on 'secondary2'

C INFO : Connect to DB as 'SYSTEM', connection_id=300064

 

a.ii) SAP status (HDB switched from primary1 -> secondary2)

 

b) HANA Studio

 

c) ODBC

 

 

 

d) HTTP

 

xsengine: http://hanatest:8005/

 

http://hanatest:8005/sap/hana/xs/admin

 

 

Hopefully this blog will serve as a reference for a client reconnect strategy when setting up HANA system replication. Also, hopefully more consultants become aware of the three excellent guides above, which provide detailed info on the client reconnect mechanism and HANA system replication.

 

Cheers,

Nicholas Chang


How to achieve Zero or near-Zero Downtime for DB-failover using SAP HANA System Replication


For quite some time, I have been working with the team on SAP HANA System Replication. This is mainly focused on a CRM on HANA or SoH HA/DR POC.

CRM on HANA is a scale-up solution – for the HA part, we prefer SAP HANA System Replication within the same datacenter, whereas for DR we leverage storage replication across datacenters.

There are two aspects:

- SAP HANA System Replication setup/failover Testing

- CRM HA : Extend SAP HANA System Replication as a HA solution for CRM

Due to business criticality, CRM system HA failover should be Auto-Failover with zero data loss.

 

There are some technical points in this regard -

SAP HANA System Replication is primarily a Disaster Tolerance (DT) / Disaster Recovery (DR) Solution and NOT a full-fledged HA solution.

• HANA System Replication is NOT Host Auto-Failover

• HANA System Replication synchronizes data between two data centers (Site A and Site B)

• HANA System Replication works only for Scale Up

 

In this blog, I will discuss SAP HANA System Replication and the possibility of making it an automated failover. I will not cover how to set up the systems to perform SAP HANA System Replication.

 

My recommendation for the above is as follows, and it is the best solution in the industry as of today:

A combination of SUSE Linux Enterprise High Availability Extension Cluster (SLES HAE) with SAP HANA System Replication. But as of today, SLES HAE takes care of the HANA database; it is not fully SAP application-aware.

 

Without SLES HAE,

Yes, HANA System Replication can be used as an HA solution if the connections from database clients that were configured to reach the primary system are "diverted" to the secondary system after a failover in an automatic way via IP redirection, DNS redirection, etc., along with the SAP HANA service auto-restart watchdog function. But again, we have to take care of the host auto-failover functionality ourselves.

Remember, in this way, SAP HANA System Replication can be used as main HA failover for zero or near-zero downtime maintenance or failures.

 

Pre-requisite/Assumption :

- SAP HANA System Replication is already configured as per the SAP standard guide.

- DB takeover from the primary to the secondary node works correctly (a quick check is sketched after this list).

- The people/team have the required skill set and proper access and authorization to perform the activity.
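A simple way to verify the first two prerequisites is to query the replication monitoring view on the primary system (a minimal sketch; the exact set of columns worth checking may vary by revision):

-- Run on the primary node; every service should report REPLICATION_STATUS = 'ACTIVE'
SELECT HOST, PORT, SECONDARY_HOST, REPLICATION_MODE, REPLICATION_STATUS
FROM M_SERVICE_REPLICATION;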

 

Preparation at ABAP Application Server :

- Set a greater value for rdisp/max_wprun_time than its default of 300 seconds. It should be greater than the time the DB takeover from the primary to the secondary node requires.

- Set the parameter rdisp/wp_auto_restart = 0

- Set the parameter dbs/hdb/quiesce_check_enable to "1" (default value is 0).

 

Just before the takeover, we have to create a file named "hdb_quiesce.dat" using the touch command in the DIR_GLOBAL directory (i.e., /usr/sap/<SAP_SID>/SYS/global).

This will suspend the connection between the application server and the database server (the primary node, in this case), which can be checked via the R3trans command.

Newly started ABAP processes do not open a connection to the database until the file is removed. The SAP application, using the dynamic profile parameter dbs/hdb/quiesce_sleeptime (default value 5 sec.), keeps checking whether the file "hdb_quiesce.dat" still exists in the DIR_GLOBAL directory. So, when the secondary DB node is fully active, one can check via the R3trans command; if it is successful, we remove the "hdb_quiesce.dat" file. Now the application can connect to the HANA database, but actually to the secondary node. One can also reset the parameter values, as the activity is over.

 

But during the above DB takeover process, we have to make the necessary changes so that the secondary DB node becomes the default DB node for the SAP application. The required IP address change and restart of network services should be performed via scripts to avoid confusion/errors.

 

A little bit complicated and hard to follow? For that reason, I have created a flowchart.


Deb_HANASR_Flowchart.jpg

Flowchart for Host Auto-Failover while using SAP HANA System Replication


Hope it is clear now.

 

We have tested the whole scenario a few times and it worked fine in all cases.

 

There are some restrictions as follows, which need to be considered :

- Long-running database transactions like background jobs, etc. are not interrupted during this activity.

- Here, Application to Database connection is closed or suspended. External connections, e.g. connection between this HANA system and SAP Solution Manager System, are not interrupted.

- This activity is only applicable for ABAP application server. Database connections from the Java stack are not interrupted.

 

BTW, as the connection from Solution Manager stays alive during the activity, one can leverage an auto-reaction method along with scripts to perform the whole scenario. We have tested that in our environment as well, and it worked smoothly.

 

For more details, consult SAP Note 1913302 - HANA: Suspend DB connections for short maintenance tasks.


SAP HANA Helps Humanity: New Members Support 2 Great Causes


Invite colleagues to join the SAP HANA International Focus Group!

To celebrate SAP's signature corporate volunteer initiative, SAP's October Month of Service (MOS), a campaign is being held to invite (& confirm) the next 1200 members of the SAP HANA International Focus Group (iFG), consisting of customers, partners, and experts focused exclusively on SAP HANA implementation and adoption. We need your help to achieve this goal!

Here’s how it works (see below for more details):

  1. INVITE: The iFG team will donate $1 per new member to two great causes: “Doctors without Borders” and “The Hope Foundation” (India) for up to 1200 new members confirmed.
  2. REGISTER:  Forward this link to colleagues who value SAP HANA – www.saphanacommunity.com. Encourage them to support a great cause and enjoy the benefits of membership!
  3. SOCIAL:  Tweet or e-mail your own message or share “Join the #SAPHANA iFG community. Help us reach our goal of 1200 new members in 30 days. Visit >>  www.saphanacommunity.com
  4. WATCH:  Current and new members can track progress by visiting the SAP HANA iFG Jam group and seeing the number of members in the upper left hand corner grow.


SAP HANA Helps Humanity
As SAP HANA is a strategic initiative for your organization, SAP “approaches corporate social responsibility (CSR) strategically – in order to ensure a sustainable future for society, our customers, and our company. By focusing our talent, technology, and capital on education and entrepreneurship, we strive to enact positive social change through economic growth, job creation, innovation, and community.”

 

The SAP HANA iFG selected these two organizations based on their great teaming with SAP, customers, and partners around the globe and the synergy with charities selected for the SAP HANA Innovation Awards 2015.  We want a fun way to grow the community and make social impact during the Month of Service!

 

Thank you for your consideration to invite your SAP HANA colleagues to the SAP HANA iFG community and join us as HANA Helps Humanity.  This initiative will last from October 5 to November 5; we hope to surpass our goal!


Click HERE if you're already a member. If not, click here for an invitation to join, or email saphanacommunity@sap.com if you have any questions!

---

Background Information:


The Hope Foundation (India)
HOPE foundation works to bring about change in the lives of children, young people and vulnerable individuals. They educate children, provide healthcare and train young people and women in skills for livelihoods. Their team of 550 people and many more volunteers and partners work in 26 cities in India through over 100 programs and community-based services. Their mission is to bring hope to those with none and change the lives of everyone they work with, including their staff, donors, volunteers and partners. http://www.hopefoundation.org.in/


Doctors without Borders

Help people worldwide where the need is greatest, delivering emergency medical aid to people affected by conflict, epidemics, disasters or exclusion from health care. http://www.doctorswithoutborders.org/about-us

 


Joining the SAP HANA International Focus Group (iFG) Jam Community!

This exclusive community provides a single, central global location for unique SAP HANA updates only available to our members.

Benefits include:

  • Access to private & selected webinars with SAP HANA Experts
  • On-demand recordings and slides from many popular topics (i.e. HANA SPS10, Dynamic Tiering, Modeling, Hadoop Integration, etc.)
  • SAP TechEd updates / sessions specific to SAP HANA
  • Early access to SAP HANA related product updates
  • Unprecedented global networking around SAP HANA topics
  • Insights from SAP HANA experts from around the world
  • 1 free ticket to a major SAP conference to the first 10 customers who agree to a 1 hour HANA Spotlight webcast.

SAP TechEd 2015 Las Vegas - ITM228 - SAP HANA: Overview of On-Premise Deployment Options


A few months back I was offered the opportunity to speak at SAP TechEd 2015 in Las Vegas. The first thing that crossed my mind was the question of what the right subject to cover would be. Since I did not want to present just for the sake of presenting, I had to find a topic that would not be redundant to the presentations from SAP, that would address an area that is not completely clear to everyone (where I can bring additional value), that would be within the scope of my expertise, and that would be seen as attractive by SAP and ASUG, who sponsor the event.

 

I was lucky to have the opportunity to get my hands on SAP HANA technology no more than a few weeks after SAP HANA was released to the market in general availability. After an initial period where I was experimenting with different job roles around SAP HANA (being responsible for installation and configuration, designing the security concept, configuring data provisioning, doing modeling, etc.) I decided to settle down on the subject that is probably closest to my heart - SAP HANA architecture, infrastructure and deployment options - and this is the topic that I selected for my presentation this year - to talk about on-premise deployment options for SAP HANA.

 

You might wonder if an on-premise discussion is still relevant when we are able to host SAP HANA in the cloud. The answer is yes. The first reason is the simple fact that there are still customers that are not yet fully embracing the cloud and are still looking at options for how to deploy SAP HANA in their own data centers. The second reason is that cloud vendors need to follow the same rules as everyone else to ensure that the result will be SAP certified - this means that their cloud solutions are based on similar principles as on-premise deployments. Understanding the advantages and disadvantages of the individual on-premise deployment options can help you understand the limitations of individual cloud offerings.

 

The topic of SAP HANA deployment options is already covered quite well by SAP - is there anything new to offer? I believe there is. SAP is doing a great job of opening up SAP HANA options by introducing topics like TDI (Tailored Datacenter Integration) and virtualization - but since they do not wish to give up on their commitment to deliver only the best performance, they always release a new set of regulatory rules prescribing configuration details. The result is that today there are many different options for how SAP HANA can be deployed - appliances, TDI, virtualization, application stacking (MCOD, MCOS, MDC) - but it is incredibly difficult to stay clear on what the regulations (and limitations) are and which options can be combined together.

 

And this is where I decided to approach the subject from a different angle. SAP typically focuses on individual options in detail, usually covering one option at a time and looking at simplistic examples to illustrate the approach. Here I intend to do exactly the opposite - first to briefly look at the individual options from an extreme point of view (how far we could potentially go) and then to outline how all these options could be combined together.

 

As you can see, the subject is quite huge - and since I was given only 1 hour (which is the standard time allocated for ASUG sessions) I had to make a tough selection on what will be presented and what not. Therefore I decided to move SAP HANA Business Continuity to a separate session (EXP27127) and also to leave some topics like SAP HANA Dynamic Tiering for another time.

 

So what will be covered in the ITM228 session? We will start by looking at the situation with appliances, providing a basic overview of the different models across all hardware vendors; then we will look at SAP HANA Tailored Datacenter Integration (TDI) with all phases and approved options; we will review SAP HANA virtualization options with a focus on VMware; then we will mention ways to stack data from multiple applications on a single SAP HANA server or virtual machine (MCOD, MCOS, MDC); and at the end we will look at ways to combine all these options together - what is supported versus what combinations should be avoided.

 

In the SAP HANA Business Continuity session (EXP27127) we will take a closer look at the two most typical options - SAP HANA Host Auto-Failover and SAP HANA System Replication. I prepared an animation illustrating how the options are designed to work and how SAP HANA behaves during a take-over. At the end of the session we will outline the most typical deployment scenarios for SAP HANA Business Continuity.

 

In the screenshots below you can find examples of the content that will be presented during the sessions. With this I would like to invite you to my sessions (ITM228 and EXP27127), and I am looking forward to meeting you in person at SAP TechEd 2015 Las Vegas. Have a safe travel.

Selection_999(3811).png

     Example 1: [ITM228] Overview of available appliance models and their usage.

Selection_999(3812).png

     Example 2: [ITM228] Visualization of SAP HANA stacking options and their approved usage for production.

Selection_999(3813).png

     Example 3: [EXP27127] Overview of typical single-node SAP HANA Business Continuity deployment options.

 

I would like to express big thanks to Jan Teichmann (SAP), Ralf Czekalla (SAP), Erik Rieger (VMware), John Appleby (Bluefin) for reviewing the slide deck and providing suggestions for improvement.

SAP HANA for Enterprise Architects – Why HANA, Why Now and How to Start


The Journey Continues - Episode 8 of 10

 

Quite often Enterprise Architects (EAs) need to work within the framework of what their CIOs give them. EAs are challenged to provide innovation and value while cutting costs and simplifying. This week's webinar had Mike Bell, Strategic Engagement Executive, on the call, who spoke to us from the standpoint of what CIOs expect. It was great to get a real-world perspective based on years of experience as a CIO. Mike has been there, through the invention and rollout of SAP from the customer point of view.

 

To start with, Mike presented the challenge that every Enterprise Architect has to deal with – the Cost to Value Challenge. How can EAs support CIOs in balancing systems risk, cost and time to value? The rest of the presentation dealt with these areas and how SAP HANA can help. Mike illustrated his journey as a CIO as he worked with SAP at the time when SAP HANA was an emerging technology.

 

Why HANA?

Many CIOs have concerns about moving into the SAP HANA technology. It is new technology, and they may not have any experience of the benefits an organization can achieve. In the webcast, the speaker took us through a journey from a vision of what his previous firm wanted, as an SAP customer, to what could eventually be done. This involved asking breakthrough questions in 2011 like: "When can we take an 85 terabyte ERP instance and compact it down into one box, and thousands of servers down to three datacenters?"

 

To help answer these questions, Mike discussed the Gartner PACE Layering methodology. Many enterprise architects are familiar with the Gartner PACE application layering model. It attempts to place all applications in one of three designations:

Weblog 8 Pic 1.png

All images © 2015 SAP SE or an SAP affiliate company. All rights reserved. Used with permission of the author.

 

This was presented with an interesting SAP overlay to show where the various SAP technologies fit in this framework:

Weblog 8 Pic 2.png

 

One additional layer was then proposed to expand this framework by introducing the concept of Systems of Discovery. SAP HANA gives you the ability to have “Systems of Discovery”.

 

From Mike’s experience, he talked about the impact of batch runs for reports and what happens when 48,000 batch jobs don’t go out. It has a real business impact. You can do business differently if you don’t have to worry about batch jobs.

 

Why Now?

One example shown in the presentation was the way a retailer can take advantage of agile pricing so they can re-price during the day to move product and reduce waste. Normally, pricing is an unknown and manual process; with the speed of SAP HANA, you are able to do dynamic pricing optimization.

 

Another way of meeting the Cost to Value challenge is by introducing SAP HANA as a sidecar. In the webcast, we learned a few ways that SAP HANA can be quickly introduced and provide immediate value to the organization.

Weblog 8 Pic 5.png

 

An important point the speaker made was how to deal with the impression that implementing SAP HANA is a binary decision: go SAP HANA or not. This is not the case; you can upgrade your ERP, introduce SAP HANA as a "Sidecar", and take advantage of new capabilities. You can do this in the cloud as well, and have an attractive cost structure as you grow into it.

 

 

How to Start.

Mike proposed implementing SAP HANA as a Sidecar technology to your ERP system so that you can introduce new technology that delivers immediate value. He also proposed that any development activities done through this needed to be delivered in 12 weeks. Why 12 weeks? That is the amount of time he felt you could keep the interest and momentum of the organization and management support for a project.

 

Using the example of rebuilding a house: you don't knock the house down to build a new foundation; you can paint a room without rebuilding the house.

Weblog 8 Pic 4.png

 

Towards the end of the presentation, Mike also reviewed the concept of Design Thinking as a process to envision new business capabilities and processes. I liked the quote “Let’s find something real that is valuable right now” as a driver to try the design thinking process.

 

On the call we heard about what a design thinking exercise looked like and how one company arrived at some new business processes that delivered value to the organization. You will want to watch the webcast to see how they went through this process. – And how the outcome delivered £200M of benefit 4 years earlier than originally anticipated.

 

There were many concepts for Enterprise Architects in this webcast that would be worth your while to review - from someone with real world experience. After viewing the webcast, you should be able to answer the questions proposed in the title of the presentation: Why HANA, Why Now and How to Start. The speaker touched on PACE Layering with SAP applications, SAP HANA as a Sidecar implementation, and how Design Thinking can help envision new business processes where SAP HANA can help out.

 

The webcast replay link: http://event.on24.com/wcc/r/1019548/F4C30A43B6FAC57DCEA804FE744A7A18

 

Webcast Materials on ASUG.com: https://www.asug.com/discussions/docs/DOC-42375

 

A few of the webcast attendee key takeaway comments:

  • Pace layering (with SAP Applications).
  • When ECC 6 support goes away, new features will only (be) available on HANA going forward.
  • Think about ECC 6 on HANA like renovating your house, help to build a much better business case.
  • How to run a design thinking process.
  • Design Thinking was new and intriguing to me. I will look to apply that.
  • How to think differently in term of life cycle and maintenance of an organization's application environment - thinking in the 3 Gartner types or 4 Mike Bell classifications of systems necessary to run or drive a business.
  • Radical transformation of thinking when in a HANA environment.
  • Getting to the top part of pace layering (faster).
  • Rethink how Pace Layering can be enhanced with HANA 12 week quick release projects.
  • The business value that can sell Hana.
  • CIOs can get the quick payback on IT investments that they need.
  • SAP HANA can be used to deliver value quickly through sidecar projects.
  • Design Thinking can help with use cases for HANA.
  • I'm not sure that I fully understand the quick value return at the Systems of Innovation level but that is very intriguing and something I will spend time to understand more clearly so I can share that with our Management Team.

 

 

In the next webcast scheduled for October 6th, the speaker will be covering “Implications of Introducing SAP HANA into Your Environment”.

 

Complete Webcast Series Details https://www.asug.com/hana-for-ea

 

The final webcast will occur at 12:00 p.m. - 1:00 p.m. ET

October 13, 2015: Internet of Things and SAP HANA for Business

Run HANA Cloud Connector with Java 1.8


Hana Cloud connector is compatible with Java 1.6 and 1.7

 

Below are the steps I followed to run cloud connector with Java 1.8

 

I downloaded HANA Cloud Connector 2.6.1.1 for Windows

 

 

 

And I have Java 1.8 on my system

 

Change the go.bat file inside the sapcc-2.6.1.1-windows-x64 folder

Before

 

After

 

Run go.bat

Open https://localhost:8443/

 

 

 

Not sure why go.bat was not delivered with 1.8 support, as most of the functionality works fine
