SAP HANA Cloud Platform – custom application deployment … lessons learned


As I have mentioned in a previous post, I finally managed to deploy and run a custom application in SAP HANA Cloud Platform (HCP).

The difficult part for me was not setting up the environment per se, but solving all the source dependencies and unknowns around the platform’s prerequisites. I found it genuinely challenging to find the relevant information on the web, so I decided to write this post listing the issues I ran into and how I managed to solve them. Some of the items might not be surprising, and to some readers they will be self-evident, but at least for me these were Aha! moments. I should add that I am not a real Java expert (I am much more familiar with the Microsoft development ecosystem), so please excuse any gaps in my knowledge of this technology stack.

Before I start, I also wanted to give a little bit of background: I used Ciber’s tool named Ciber Momentum Engineer (CME) to generate the base application source code. You would be surprised to see how much of the actual source can be generated, and the quality of it! The generated Java application is based upon a reference architecture that really acts as a sample architecture. It leverages Hibernate for the data abstraction layer, Spring for service execution and batch processing, and AngularJS for the presentation layer. One of the great features of CME is that these templates can be highly customized and enhanced … both at the customer level and on a project-by-project basis.

So, here are my lessons learned:

  1. Java SDK 1.6 or 1.7 is required; I used 1.7
  2. The main framework releases I specified (in Maven’s pom.xml) are as follows:
    • org.springframework.version: 4.2.1
    • org.springframework.security.version: 4.2.0
    • org.springframework.data.version: 1.10.5
    • org.hibernate.version: 4.3.11
    • logback.version: 1.1.8
    • quartz.version: 2.2.1
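  In pom.xml these translate into version properties roughly like the fragment below. This is a sketch: the property names simply follow the list above, and the exact version suffixes depend on the repository (Spring release artifacts typically carry a ".RELEASE" suffix, Hibernate a ".Final" suffix).

```xml
<!-- Hypothetical <properties> fragment; verify the exact suffixes
     against the artifacts available in your Maven repository. -->
<properties>
  <org.springframework.version>4.2.1.RELEASE</org.springframework.version>
  <org.springframework.security.version>4.2.0.RELEASE</org.springframework.security.version>
  <org.springframework.data.version>1.10.5.RELEASE</org.springframework.data.version>
  <org.hibernate.version>4.3.11.Final</org.hibernate.version>
  <logback.version>1.1.8</logback.version>
  <quartz.version>2.2.1</quartz.version>
</properties>
```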
  3. The Hibernate dialect I used is org.hibernate.dialect.HANARowStoreDialect
  4. HANA does not seem to support IDENTITY-style generation of primary key values, so I had to change some of the entities accordingly, i.e. update the annotations
    • from
      • @GeneratedValue(strategy = GenerationType.IDENTITY)
    • to
      • @GeneratedValue(strategy = GenerationType.TABLE)
      • or @GeneratedValue(strategy = GenerationType.AUTO)
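  As an illustration, an affected entity would end up looking roughly like this. This is a hypothetical sketch (the Customer entity and its fields are my own invention, not part of the generated code); the only relevant detail is the TABLE strategy on the @GeneratedValue annotation.

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// Hypothetical entity sketch: TABLE generation works on HANA,
// where IDENTITY did not (at least in the versions I used).
@Entity
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.TABLE)
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```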
  5. JDBC data source: the binding from code to database happens through bindings specified directly in HCP. Thus, instead of a regular data source definition (URL, username, password), I changed the code to use JNDI (Java Naming and Directory Interface), as shown below. Please note that I manually created the named binding "jdbc/dshana"; by default HCP creates a standard binding "<DEFAULT>" that can be looked up as "jdbc/DefaultDB".
    • final JndiDataSourceLookup dsLookup = new JndiDataSourceLookup();
      dsLookup.setResourceRef(true);
      DataSource dataSource = dsLookup.getDataSource("jdbc/dshana");
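  Wired into a Spring Java configuration, that lookup might look like the sketch below (the configuration class and bean method names are my own; only the JNDI lookup itself comes from the actual setup):

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;

// Hypothetical configuration class: resolves the HCP-bound data source
// via JNDI instead of a hardcoded URL/username/password.
@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        final JndiDataSourceLookup dsLookup = new JndiDataSourceLookup();
        // Resolve the name relative to java:comp/env/
        dsLookup.setResourceRef(true);
        return dsLookup.getDataSource("jdbc/dshana");
    }
}
```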
  6. As for the data source, I also added a reference in web.xml
    • <resource-ref>
        <res-ref-name>jdbc/dshana</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
      </resource-ref>
  7. Hibernate required a few more properties, i.e.
    • "hibernate.hbm2ddl.auto" set to "create"
    • and the default schema specified, i.e. "hibernate.default_schema" set to "NEO_xxx" (where "NEO_xxx" is the random name of the schema created by HCP)
    • Note: I used the Eclipse plugins to connect to the HANA database schema; the plugin lets you look up the name of the "NEO_xxx" schema.
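  Collected into one place, the Hibernate settings from this post look roughly like the helper below. This is a sketch of my own (the HibernateSettings class is not part of the generated code), and "NEO_xxx" is only a placeholder for the schema name that HCP actually generated.

```java
import java.util.Properties;

// Hypothetical helper: gathers the Hibernate settings described above.
// Replace "NEO_xxx" with the real schema name from the Eclipse HANA tools.
public class HibernateSettings {

    public static Properties build(String schema) {
        Properties props = new Properties();
        props.setProperty("hibernate.dialect", "org.hibernate.dialect.HANARowStoreDialect");
        props.setProperty("hibernate.hbm2ddl.auto", "create");
        props.setProperty("hibernate.default_schema", schema);
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build("NEO_xxx").getProperty("hibernate.default_schema"));
    }
}
```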
  8. The generated source code creates a file for the Spring Batch configuration. A jobRepository method inside that config file leverages org.springframework.batch.core.repository.support.JobRepositoryFactoryBean to actually create the JobRepository object. The issue with this bean is that it tries to identify (through JDBC metadata) the database management system used by the application. This threw an exception, since "HDB" is not supported by this framework, or at least not in the version used. I tried newer releases of Spring, but none of them worked or they had other dependencies that caused issues. My solution was to implement my own bean, MyJobRepositoryFactoryBean, extending AbstractJobRepositoryFactoryBean. To be pragmatic, I hardcoded the databaseType property to "SYBASE" and used this implementation in the above-mentioned Spring Batch config.
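  A minimal sketch of that workaround might look as follows. This is my reconstruction, not the exact code: here I extend the concrete JobRepositoryFactoryBean (which itself extends AbstractJobRepositoryFactoryBean), since that avoids reimplementing the DAO factory methods while still letting me pin the database type.

```java
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;

// Hypothetical workaround sketch: set the database type up front so the
// factory skips metadata-based detection, which fails on "HDB".
public class MyJobRepositoryFactoryBean extends JobRepositoryFactoryBean {

    public MyJobRepositoryFactoryBean() {
        // Pragmatic choice: the SYBASE SQL variants happened to work here.
        setDatabaseType("SYBASE");
    }
}
```

  The custom class is then referenced in the Spring Batch configuration in place of the standard JobRepositoryFactoryBean.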
  9. My logback.xml looked like this
    • <configuration>
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
          <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
          </encoder>
        </appender>
        <root level="INFO">
          <appender-ref ref="STDOUT"/>
          <!-- <appender-ref ref="LOGFILE" /> -->
        </root>
      </configuration>
  10. Finally, after compiling the source code, one last step was required. When deploying the application's WAR file, the deployment process threw an exception about SAXParserFactory not being found. I found out that this was caused by a duplicate reference to "xml-apis". Since I was not aware of any other way to handle this, I simply removed the "xml-apis" JAR file from the WAR.
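  An alternative I did not try (so take it as an untested sketch) would be to exclude xml-apis in pom.xml so the duplicate never enters the WAR in the first place. The groupId/artifactId of the outer dependency below are placeholders; you would first have to find which dependency actually drags xml-apis in (e.g. via mvn dependency:tree).

```xml
<!-- Hypothetical exclusion sketch: the outer dependency coordinates are
     placeholders for whichever artifact pulls in xml-apis transitively. -->
<dependency>
  <groupId>some.group</groupId>
  <artifactId>library-that-pulls-xml-apis</artifactId>
  <version>1.0.0</version>
  <exclusions>
    <exclusion>
      <groupId>xml-apis</groupId>
      <artifactId>xml-apis</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```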

I hope this helps others to jump over the hurdle of deploying an application to HCP. Let me know if you have any questions, comments or concerns.
