There’s a lot of talk about HANA adoption being on the up-and-up and about SAP’s Q1 numbers being largely buoyed by a transformation in the mindset of IT. The “radical transformation of the industry” statement was made by SAP co-CEO Jim Hagemann Snabe in an interview on CNBC and reported by Forbes.
There should be some concern about this perception of IT’s role in servicing the business, though. Conceptually, renting someone else’s infrastructure to run IT operations, the way cloud technology is positioned, isn’t really something terribly new, is it? Data processing bureaux have been around almost since the first commercially viable computer systems became possible. The nature of big businesses, though, is that they like control, and pushing their technology off to a third-party provider will not be palatable to many of them. It may be palatable to IT perhaps, but ultimately not very appealing or of much interest to the business. How would IT convince the business to commit to a HANA investment, cloud based or otherwise? The answer has to be, simply: higher performance, scalability and a lower total cost of ownership (TCO). The cloud has the potential for the first two, but does it really mean a lower TCO?
SAP’s current incarnation of in-memory technology is something that all the major technology players have experimented with to some extent. Most significantly though, the recent proposal that businesses move their Online Transaction Processing (OLTP) systems to fully in-memory technology seems extraordinarily expensive. Memory-laden systems are something that Oracle have been pushing for some time with their own appliance offerings, and undoubtedly there is some sentiment that the whole HANA initiative is a direct attempt to undermine Oracle’s existing stronghold as the RDBMS supporting SAP ERP. But it is much more than that: it is a genuine acknowledgement that there is a lot of data in those systems and that current analytics approaches are not performing well enough to address business needs.
I won’t go into the benefits of memory-based OLTP versus a hybrid approach of volatile memory, disk caching and disk storage, but it is important to understand that there is an inevitability to this technology and now is as good a time as any to get on board. Any aspirations IT might have of consolidating all its peripheral systems to a single HANA instance are likely overly ambitious. Consider too that the way this technology is being positioned presupposes that it can be the best solution for everything, i.e. OLTP and Online Analytical Processing (OLAP). See Martin Klopp’s article on consolidation and some unaudited HANA performance numbers; of particular interest may be the statement that “HANA inserts rows at 1.5M records/sec/core or 120M records/sec per node” – those sound like screaming numbers, particularly if you are planning on using a product like Winshuttle Transaction with SAP ERP to push batched or pre-staged data into your OLTP system.
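It’s worth doing the back-of-the-envelope arithmetic on those quoted figures. A minimal sanity check (using only the two numbers quoted above; the implied core count and the 1-billion-row load time are derived, not published benchmarks):

```python
# Sanity check on the quoted, unaudited HANA insert rates.
per_core = 1_500_000      # rows/sec per core, as quoted
per_node = 120_000_000    # rows/sec per node, as quoted

# The two figures together imply a node size:
implied_cores = per_node / per_core
print(f"Implied cores per node: {implied_cores:.0f}")   # 80

# At the quoted node rate, a hypothetical 1-billion-row load would take:
rows = 1_000_000_000
print(f"Seconds to insert 1B rows on one node: {rows / per_node:.1f}")  # 8.3
```

In other words, the two numbers are only consistent for an 80-core node, and at face value they put a billion-row bulk load in the single-digit seconds range — screaming indeed, if they hold up under audit.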
The ‘solves all performance issues’ assumption that IT may be making is a little surprising, since many of the use cases hailed as incredibly beneficial have been squarely in the OLAP and analytics space, and more particularly in scenarios where there are no synchronization or latency issues between the OLTP data being replicated or made accessible to the OLAP technology. For OLTP consumers to reap the benefits of in-memory technology, the expectations are pretty clear: faster data retrieval, faster decision making and faster storage. The Farber et al. paper “The SAP HANA Database – An Architecture Overview” describes HANA succinctly, but it also points out that some of HANA’s inherently advantageous characteristics are in fact its compression schemes, which mean less memory consumption and lower memory bandwidth utilization. Perhaps surprising is the suggestion that real-world OLTP actually doesn’t match the TPC-C profile. This has resulted in SAP’s Benchmark Council taking a different view on transaction processing throughput and how it should be measured.
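To make the compression point concrete, here is a minimal sketch of dictionary encoding, the classic column-compression scheme used by column stores of this kind (a simplified illustration, not HANA’s actual implementation): repeated column values are stored once in a dictionary, and the column itself shrinks to a vector of small integer codes.

```python
# Dictionary encoding sketch: a column of repetitive values (common in ERP
# data, e.g. country codes, document types) becomes a small dictionary of
# distinct values plus a vector of integer codes.
def dictionary_encode(column):
    dictionary = sorted(set(column))                  # distinct values, ordered
    code_of = {v: i for i, v in enumerate(dictionary)}
    codes = [code_of[v] for v in column]              # column as integer codes
    return dictionary, codes

def dictionary_decode(dictionary, codes):
    return [dictionary[c] for c in codes]

countries = ["DE", "US", "DE", "FR", "DE", "US", "DE", "FR"]
dictionary, codes = dictionary_encode(countries)
print(dictionary)   # ['DE', 'FR', 'US']
print(codes)        # [0, 2, 0, 1, 0, 2, 0, 1]
assert dictionary_decode(dictionary, codes) == countries
```

Because the codes are small integers rather than full values, both memory footprint and the memory bandwidth needed to scan the column drop, which is exactly the advantage the paper highlights.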
In their paper, Farber et al. suggest that real OLTP workloads have a larger proportion of read operations than the standard TPC-C benchmark suggests, and this is perhaps particularly true when one considers that changing and updating existing records in a given ERP system is very commonplace.
Those unfamiliar with the TPC-C benchmark may be interested to know something about the TPC. In 1988, the database and application vendors formed a consortium called the Transaction Processing Performance Council (TPC). The TPC’s goal was to reduce the bench-marketing hype and smoke being created by hardware, database and application vendors by defining a level playing field on which all vendors could compete and be measured. In 1989 the TPC-A benchmark was born. TPC-A defined metrics for performance in transactions per second (tps) as well as price/performance ($/tps). The TPC-C benchmark soon followed and continues to be a popular yardstick for comparing OLTP performance on various hardware and software configurations.
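The price/performance metric is simple division, but it is the number that lets a modest system beat a monster one. A quick illustration with hypothetical figures (neither system nor price is a real benchmark result):

```python
# The TPC's two headline metrics: throughput (tps) and price/performance
# ($/tps). Hypothetical numbers for illustration only.
def price_performance(total_system_cost_usd, tps):
    """Dollars of total system cost per transaction per second."""
    return total_system_cost_usd / tps

# A $500,000 system measured at 25,000 tps:
print(f"${price_performance(500_000, 25_000):.2f} per tps")   # $20.00 per tps

# A cheaper $150,000 system at 10,000 tps wins on price/performance
# despite losing on raw throughput:
print(f"${price_performance(150_000, 10_000):.2f} per tps")   # $15.00 per tps
```

This is precisely why a full in-memory OLTP configuration has to justify itself on more than raw throughput: if the memory-laden hardware inflates the numerator faster than it raises the denominator, the $/tps figure moves the wrong way.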
Wholesale porting of SAP ERP to HANA may not necessarily bring immediate performance benefits without modifications to the underlying application code, and this is borne out by the raft of training being made available specifically for developing on HANA platforms, which suggests that some of the data retrieval logic may need to be reworked. A question, then, is what one should do in terms of a migration strategy if a HANA-based system is the next target for your ERP migration. Should you consider migrating your existing system as is, and can you in fact do that without coding changes? Or should you rather meticulously plan rework of some of your existing code in preparation for a migration?
SAP’s key event, Sapphire, is coming up in Orlando next week, and when you’re having a HANA conversation these are certainly questions you will want to ask.