Why do we bother maintaining a business calendar at all in process management?
Of course the answer isn’t simple. At the most basic level, it is because when we measure process metrics we want to be sure not to penalize anyone unduly because a weekend or a public holiday fell during the flow of a process.
At a more sophisticated level, it is to provide opportunities to smooth the flow of work and keep weekends and public holidays from interfering with end-to-end process measurement.
What this fails to recognize, though, is that some people are permanently connected to their work, very attentive to workflow tasks, and will approve things at all hours of the day.
This means that even if the calendar says they are booked off work (past siren time), enjoying their weekend or even on holiday, if they choose to take action on a workflow task on their phone, tablet or PC, they are effectively confounding the metrics. Worst of all, we don’t typically give them credit for their accelerated performance on those occasions. Shouldn’t we?
If a workflow process acts like a dispatcher and dynamically redirects a task to someone else based on the calendar, then that’s a great advantage and a really good reason to have the calendar influence the metrics. But if the only purpose is to provide a process safety net for performance calculations, then I am not convinced of its merits.
A few calendar examples
As a precursor to working in the Middle East I bought a small book years ago entitled ‘Don’t They Know It’s Friday: Cross Cultural Considerations for Business and Life in the Gulf’ by Jeremy Williams. It is a lightweight easy read and for those who work or have worked in one of the Gulf states it is something one can easily relate to.
Working in the Middle East on a Sunday you become particularly aware of the fact that the rest of the world is not working. Correspondence with suppliers, vendors and some customers goes unanswered and you can forget about phoning them.
In this transcontinental business world that we live in, it is not unusual to have business processes serviced by groups in all kinds of geographies. As a consequence, when my day starts, the day is already over for people just my side of the international date line, and those in Hawaii and on the West Coast have long gone home, and may even have gone to bed, with a full equivalent day still to come.
If my business process relies on any of those people to do something, they’re not going to pick it up until they start their day again. The reality is that 5-8 hours will have elapsed before they get to the work at hand.
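That elapsed hand-off gap is easy to quantify. Here is a minimal sketch; the cities, times and the 09:00 start of day are my own illustrative assumptions, and it deliberately ignores weekends and holidays:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

def hours_until_next_workday(now_utc, tz, start_hour=9):
    """Hours from a given UTC moment until the recipient's next local workday start.

    Simplified: ignores weekends and holidays, assumes a fixed start hour.
    """
    local = now_utc.astimezone(ZoneInfo(tz))
    start = local.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    if local >= start:
        start += timedelta(days=1)  # already past today's start; wait for tomorrow
    return (start - local).total_seconds() / 3600

# A task handed off at 09:00 in Dubai (05:00 UTC) waits half a day
# before a West Coast approver is back at their desk.
handoff = datetime(2015, 3, 2, 5, 0, tzinfo=ZoneInfo("UTC"))
print(hours_until_next_workday(handoff, "America/Los_Angeles"))  # 12.0
```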
Rethinking the process metric
I think there is an argument for rethinking the process metric and going back to the fundamental question: why do I have a process metric at all? The reason I believe we have metrics is that we need to demonstrate progress against goals and service levels mutually agreed within the business, and to have a good handle on bottlenecks, inefficiencies and problems in the process. These could relate to a task or to an individual.
The laggard participant, for example, who always takes ages to approve or action things, or the task that is so complex that it has many subtasks and actions taking place outside the transparency of the workflow.
We want to understand who the laggards are, call them out and try to ascertain why they are delaying the process. We want to understand which tasks are troublesome and look for ways to optimise and streamline.
If we hide behind calendars, though, we frustrate transparency and allow people to game the reporting statistics. This becomes really important if you’re paying a premium for a specific service level.
Consider the person who submits the request 5 minutes before the deadline or the person who arrives at work earlier than their colleagues and performs all the work assigned to them in their queue before everyone else even gets started. Does the former get penalized and the latter get rewarded? Is their behaviour transparent in the process metrics?
If our objective is to have a task completed within 24 hours of submission, what is the basis of that 24-hour metric? Is it based on Standardized Work? Is it based on assumptions about down-time, cycle times, smoke breaks, lunch breaks and other delays?
The preferred KPI
In some instances ‘timing’ is factored in, making this an ‘adjusted KPI’. Sometimes an adjusted KPI is an accepted value but not the preferred KPI. When we walk up to a cash machine we expect to conclude our transaction in just a couple of minutes. We wouldn’t be very happy if the machine said, ‘Come back in half an hour and I will have your money ready then…’
The preferred KPI would be the amount of time it would take if this were a single-participant process – that’s your Standardized Work.
This means that if I have to code and post a 1,000-line accounting document in the system of record, I should be able to do that in, say, an hour from the moment I receive it, assuming I have all the data I need to do the job properly and I am not doing something else at that moment. It also means that in an 8-hour day I should be able to process at least 8 of them. It follows that if my group receives 240 of these documents per day, then I need 30 people to get through the work within the SLA.
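The staffing arithmetic can be written down directly; this sketch simply restates the figures above (1 hour per document, 8-hour days, 240 documents a day):

```python
# Back-of-the-envelope staffing from Standardized Work figures.
import math

hours_per_doc = 1.0     # Standardized Work time per accounting document
hours_per_day = 8       # working hours per person per day
docs_per_day = 240      # documents arriving per day

docs_per_person = hours_per_day / hours_per_doc        # 8 documents per person
staff_needed = math.ceil(docs_per_day / docs_per_person)
print(staff_needed)  # 30
```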
In reality though, if I receive all 240 documents in the last hour of the work day, I am not going to get the work done within that hour. You could argue that the same staggering will likely occur tomorrow too, but as we all know, work doesn’t always flow that way. This is one of the reasons why another measure often gets introduced: a business process management metric that tracks cadence or flow and alerts on extraordinary peaks and troughs. Interestingly, these too become subject to the calendar.
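A cadence metric of this kind can be as simple as flagging days whose volume deviates sharply from the norm. This is only an illustrative sketch with invented volumes, not any particular BPM product’s method:

```python
from statistics import mean, stdev

def flag_anomalies(daily_volumes, k=2.0):
    """Return indices of days whose volume deviates more than k standard
    deviations from the mean daily volume."""
    mu, sigma = mean(daily_volumes), stdev(daily_volumes)
    return [i for i, v in enumerate(daily_volumes) if abs(v - mu) > k * sigma]

# A week of invented document counts, with one spike on day 4.
volumes = [240, 235, 250, 245, 610, 238, 242]
print(flag_anomalies(volumes))  # [4]
```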
A further complication arises at period end, which overrides all the KPIs and simply states: ‘Here’s the date by which it all needs to be done!’
So what I would really like to see is the calendar discarded altogether and an acceptance that if a process took 50 hours to complete because a weekend fell between submission and completion, then that is how long it took. We may have to get granular with the classification of the time itself, but that’s another matter.
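Measuring raw wall-clock time, calendar or no calendar, is trivial; the dates below are invented to reproduce the 50-hour weekend-spanning case:

```python
from datetime import datetime

# Submitted Friday afternoon, completed Sunday evening: 50 wall-clock hours,
# weekend included, no business calendar applied.
submitted = datetime(2015, 2, 20, 16, 0)  # Friday 16:00
completed = datetime(2015, 2, 22, 18, 0)  # Sunday 18:00
elapsed_hours = (completed - submitted).total_seconds() / 3600
print(elapsed_hours)  # 50.0
```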
In aggregate it will all smooth out anyway but at least it will eliminate or flag up any work cadence issues. Today that may not be very transparent with the dependence on calendars.
In the end we all have to own the end-to-end quality of the work we do, even when we have to rely on others to help us achieve our goals.
I was quietly minding my own business when one of our sales account managers over at Winshuttle sent me an unsolicited email suggesting that I make myself available for a couple of hours to walk a prospect through a business problem and see how we could solve the challenge with a product or combination of products.
I don’t shy away from challenges, but without more context I mentally rolled my eyes and thought, oh boy! The next thing I saw was another calendar invitation from a solutions engineer suggesting that I join a ‘discovery’ call with the prospect and dig into what exactly they wanted to be able to do. At this point, you can imagine more eyes rolled heavenward as I wondered what I had let myself in for.
In the world of SAP there are four kinds of people: the know-it-alls, the know-nothings, the know-a-bits and the fakers. In my time working with the product I have seen and danced with all four. If I were to classify myself, it would be in the ‘know a bit’ class. I am not downplaying what I know; I just know that there’s an awful lot I don’t know, and that chasm grows wider every day. Or rather, the bit I know seems to become increasingly diminutive the more I learn.
So when you come across someone who has played with SAP for some time, you’re always a little wary; after all, the more years of experience the person has with SAP, the more likely they are to know some stuff. Play along with me and you will see what I mean.
So on the call I am presented with a problem, a problem that interestingly, appears on SAP’s SCN too!
Dear… how to find in active customers list? ex : i gave given open items days 3 months in ova8 – suppose if i want to know which customers are not paying money for nearly 3 months then how to find inactive customers list or give me any table
Ok, so the English is godawful and broken, but you get the idea. How do I identify active customers? Or rather, in this case, how do I identify inactive customers? You would think this is an easy thing to ascertain; after all, doesn’t SAP deliver everything? Well, if you look in that SCN article, evidently not.
The clue comes in the posting that says:
Technically Inactive Customers means the Customers who have done no transactions (Sales / Payments) with business since long time. While what you need is the list of Customoers (sic) who have not made any payments since last three months. For this I believe FBL5N will be a good report for get the necessary details. Secondly you can also approach your FI Consultant. He can pull data of Incoming Payments from relevant tables & get the list of Customers who have paid in last three months. Which will means that all the others Customers have not paid. Also try T-Code FD11 which will give Analysis for Customer’s Account.
And now we come to the critical part, “approach your FI Consultant. He can pull data of Incoming Payments from relevant tables & get the list of Customers“.
This is basically where the conversation with this prospect started.
Effectively he told us that one of his IT staff took two days to dump the data from the relevant tables and then analyse it to arrive at the answer. How would you do it?
Again, being pretty lazy, I didn’t want to spend too much time working out what exactly I needed so I begged forgiveness and ignorance and asked for the table names and fields from the caller and a day later I received them.
The issue here is that when you build a query of joined tables, the relationships between the tables are generally expressed through some common key, like the customer ID. The further challenge is that in certain instances, sales invoices vs. payments from customers, the customer ID may be common but the document IDs in SAP are not. Every payment gets its own document ID even though it may relate to a sales invoice ID. So I started simply enough by joining customer to BSAD and then customer to BSID; because I knew about this document ID challenge, I guessed that an outer join might be better. Unfortunately I only get to use one outer join, because the Winshuttle Studio Query module allows a single outer join to minimize the impact on system performance.
BSID and BSAD are the tables in SAP holding the open and cleared items of customer accounts. Think of them as a mashup of the primary accounting data also found in BSEG and BKPF. Again, with credit to SAP’s SCN articles:
So the main tables of the FI document are BKPF and BSEG; if BSEG-KOART (account type) is ‘D’, it is a customer item, so you can find that item in BSID or BSAD, where:
BSID/BSAD = BKPF + BSEG.
By using BSID and BSAD it is possible to find customer items faster than by reading BKPF and BSEG directly, because KUNNR (customer) is not a key field in BKPF or BSEG and those tables can be massive. So you can consider BSID and BSAD secondary index tables that improve performance when you need to pick customer data directly. Additionally, they are transparent tables. Not that you should worry too much about this, but it effectively means that as they appear in the dictionary (as tables), so they exist in the database; company code and customer number as key fields therefore make them really easy to use. You can read more on the other types of tables here – all of which, incidentally, can be used and joined to with a Winshuttle Query.
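To make the BSID/BSAD = BKPF + BSEG relationship concrete, here is a toy sketch with invented rows. The field names (BUKRS, BELNR, BUDAT, KOART, KUNNR) are the real SAP ones mentioned above, but the data and join logic are purely illustrative:

```python
# Toy illustration of BSID/BSAD = BKPF + BSEG for customer items (KOART = 'D').
bkpf = [  # document headers: company code, document number, posting date
    {"BUKRS": "1000", "BELNR": "0001", "BUDAT": "2015-01-15"},
    {"BUKRS": "1000", "BELNR": "0002", "BUDAT": "2015-02-01"},
]
bseg = [  # line items: document number, account type, account
    {"BELNR": "0001", "KOART": "D", "KUNNR": "CUST1"},
    {"BELNR": "0002", "KOART": "K", "LIFNR": "VEND1"},  # vendor item, excluded
]

# Join header to item on BELNR, keeping only customer items.
headers = {h["BELNR"]: h for h in bkpf}
customer_items = [
    {**headers[item["BELNR"]], **item} for item in bseg if item["KOART"] == "D"
]
print([(i["KUNNR"], i["BUDAT"]) for i in customer_items])  # [('CUST1', '2015-01-15')]
```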
So, returning to my problem. Joining these tables and extracting them in real time meant that I could identify the active customers and their last activity, but the rest didn’t really work, because every event potentially had more than one entry and in the end I wasn’t getting a very satisfactory result.
So I bit the bullet and created a single table query script for each table but then added the sweetener – I linked them together as a chain and bound each query to a different sheet in the workbook.
Ok, so be aware that I am looking at a 1,000,000 row limitation per sheet in Excel, but hey! I am working with IDES data and even the best of our organ grinders can’t make that much stuff in these tables unless they really want to.
My queries ran pretty quickly, less than 30 seconds each, dumping KNA1, KNB1, BSID and BSAD. I could have joined the first two; after all, I only needed the first one for the customer name, but in the end it was a quick run anyway. I needed KNB1 because the same customer could appear under different company codes, so knowing the valid combinations was important.
All told, there were about 22k customer and company code combinations, a few thousand BSID entries and about 30,000 BSAD entries. I extracted the document posting date because that was the key to the last activity.
Now that I had the data, what next?
Well, this neat little Winshuttle Analytics add-on for Excel kicked in at this point. Converting all of the Excel table extracts to in-memory tables with WS Analytics I now had the ability to perform some ad hoc inquiries.
The first of these was: how many unique combinations of company code and customer ID were in BSAD? Next, what was the latest entry in BSID and BSAD for each of those combinations? Building the queries took seconds; casting them down the combinations took a few seconds more. The next part of the challenge was doing the same against the KNB1 table because, in the end, I wanted to identify the dormant accounts. The casting took a little longer: since Winshuttle Analytics is effectively a custom plug-in for Excel that brings in-memory tables and data pivots on steroids to the table, I am still hamstrung by the speed of the Microsoft Excel calculation engine, and sometimes, to be quite frank, it can be a dog. I got there after a few minutes and then added a few more tweaks. I developed a third formula in Excel, one that would assign a deletion candidate indicator. This I could use with a transaction recording to flip the record deletion flag in the SAP master record.
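The dormancy logic itself is straightforward once the extracts exist. This sketch mimics it in plain Python with invented rows; it is not how Winshuttle Analytics works internally, just the same idea: take the latest posting date per company code and customer combination from the BSID/BSAD extracts, then flag every KNB1 combination with no activity since a cutoff:

```python
from datetime import date

# Invented extract rows: (company code, customer, posting date) from BSID/BSAD.
activity = [
    ("1000", "CUST1", date(2015, 1, 20)),
    ("1000", "CUST2", date(2014, 9, 3)),
    ("2000", "CUST1", date(2014, 12, 30)),
]
# Valid company code / customer combinations from KNB1.
knb1 = [("1000", "CUST1"), ("1000", "CUST2"), ("2000", "CUST1"), ("2000", "CUST3")]

# Latest posting date per (company code, customer) combination.
last_seen = {}
for bukrs, kunnr, budat in activity:
    key = (bukrs, kunnr)
    if key not in last_seen or budat > last_seen[key]:
        last_seen[key] = budat

cutoff = date(2014, 11, 1)  # roughly "no activity in the last three months"
deletion_candidates = [key for key in knb1 if last_seen.get(key, date.min) < cutoff]
print(deletion_candidates)  # [('1000', 'CUST2'), ('2000', 'CUST3')]
```

Note that a combination with no activity rows at all (here 2000/CUST3) is also flagged, which matches the intent of hunting for dormant accounts.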
The whole exercise took about an hour – pretty good when you compare that with two days!
I showed this to our visitors. I was disappointed that I hadn’t been able to do it all with a query, but in the end it didn’t make much sense to do it all with a query. The alternative would likely have been a long-running ABAP program, but I managed it with an hour of thinking and doing, nothing more than a couple of desktop tools and, most importantly, no changes to my SAP system!
I’ll be the first to admit that my knowledge of SAP security authorizations is lightweight.
I know enough to be dangerous but not enough to be anywhere close to a ninja. If I am confounded by something, invariably I have to read up about it or find an SME to help me.
What I can tell you, though, is that in my present and past lives, SAP security authorizations have been the bane of users’, support personnel’s and developers’ lives.
I am not entirely sure why this is the case.
My long-held suspicion is that this has a lot to do with mistakes, omissions and explicit decisions made over the years in the way solutions and applications have been architected. Unfortunately, a holistic view of the security authorization model is often absent among developers, and they rely on the auditors and the authorization framework to determine what can and cannot be done safely. It’s not a very good approach and is fraught with risk.
The situation is further compounded by the absence of robust knowledge and subject matter expertise among developers as a whole. The latter is one of the reasons why the annual security and controls audit that many companies are subject to is an equally painful experience. The auditors come in, run a bunch of scripts, dump a bunch of data to files and then run away with them, returning some weeks later with a list of questions the length of one’s arm. It is something I became quite good at handling, but nonetheless something I really hated: a massive time suck.
When we implemented BW and the Portal, admittedly some years ago, we found our ABAP security authorizations model became infinitely more complex. There were security authorizations that were now specific to those environments too! Then came SAP CRM, and again, authorizations had to be set up for that. With the growing adoption of HANA and the promise of S/4HANA, the question will come up as to whether the security model will change and to what extent. It seems, for security at least, there’s more complexity coming…
SAP Fiori Security Authorizations
SAP Fiori applications communicate with the ABAP stack through OData services. In addition to ABAP authorizations, users must be granted authorization to access the HTML5-based Fiori applications and the OData services in SAP NetWeaver Gateway. SAP Fiori applications therefore require users and roles in SAP NetWeaver Gateway.
An SAP NetWeaver Gateway PFCG role contains start authorizations for OData services. SAP doesn’t deliver these roles to customers.
The SAP Fiori Launchpad relies on the authorizations in the ABAP back-end server and the authorizations to access the OData services in the SAP NetWeaver Gateway in the ABAP front-end server. Configuring these relies on Launchpad catalogs and UI PFCG roles.
A technical role is needed for the SAP Fiori Launchpad, and SAP delivers a predefined set of technical objects for it, in particular the required profiles. For ‘own’ catalogs these and possibly other technical objects are also required. The UI PFCG-defined roles bundle all the front-end privileges required to execute Fiori apps. Through user assignment to back-end roles, additional privileges are provided to execute the specific applications or access the OData services. The powerful ‘fact sheets’ require authorization roles in the front end, which grant users authorizations for the ICF services and Business Server Pages (BSP) that support the fact sheets.
Simple? Not a chance…
It could be that things have started to simplify, but I have my doubts; or rather, I am not optimistic. All of this suggests ‘more complex’. Simplifying things on the front end will mean more complexity on the back end.
I was reminded of this again this week when discussing EHS/EHSM with a consultant. We were debating the merits of using BAPIs and RFMs to create and maintain EHS data and considered the implications for the security authorization model. After all, we wouldn’t be using transactions for the automations. In the end we concluded it might be ok, but the question ultimately will be whether or not this stands up to audit scrutiny.
If you wonder what all the fuss could be about then you only have to trawl over to SAP SCN and read some of the many discussions on security and authorizations.
Alternatively, if you’re the kind of person who likes to learn through self-study, then you might want to make a small investment in Andrea Cavalleri and Massimo Manara’s book from Galileo Press, 100 Things You Should Know About Authorizations in SAP (ISBN: 9781592294060). For the ABAP developer, a slightly older but indispensable guide is Authorizations in SAP Software: Design and Configuration by Volker Lehnert (ISBN: 9781592293421). Both are available at SAP Press and through Amazon. Neither contains anything on the Fiori model; for that you will need to look elsewhere, such as SAP Help.
Unfortunately, the SAP Fiori Security Guide that was originally published as a PDF has been taken down from its original location, perhaps because it is a bit out of date, so again, refer to SAP Help for more information.
My eldest son says that I am ‘very judgemental’. I don’t really have a defence, and though I do tend to complain a lot about stuff, the truth is that there is a lot of stuff out there to complain about!
You can guess then where this post is going. It’s another rant about bad programming and poorly applied thinking in technology. I’ll concede that things have changed immensely over the years since SAP was first let loose on the unsuspecting business community, but you would be surprised how much new bad code crops up.
Yesterday I was posed the challenge of trying to build automation around a multi-headed gorgon in the form of the SAP transaction MI10. It’s one of those peculiar transactions that shouldn’t be used much but which probably gets used quite a lot, given that inventory sometimes gets damaged, lost, or mysteriously grows legs and walks out of the warehouse.
On face value, the transaction is probably not altogether terrible but it is when you discover that certain complex scenarios make it super difficult to work with.
Getting physical with your inventory
To automate any kind of process for more efficient use, you need to look for repeatable patterns to reduce the amount of logic needed to stream the data. In this instance, if you are making use of serialized inventory and something needs to be stated or changed in relation to it, then you need to abide by your system’s accounting rules and provide the write-downs and write-offs with as much information as possible in order to avoid an audit exception note.
With tight margins, efforts to constrain inventory and free up working capital, the physical inventory process is key to ensuring that you have in stock what your system says you have in stock, and if you have variances – account for them.
MI10 can be used as an umbrella transaction for the equivalent activities of taking and recording physical inventory and making adjustments. SAP supports breaking this process into three task streams but also allows you to unify them in MI10. All well and good. At the time of making the postings to adjust inventory, you should know the physical quantity of items you wish to adjust for. In MI10 this quantity value then dictates how many fields need to be populated with data, by rendering a screen with as many field spaces for serial numbers as there are items to adjust for.
This would be a fine approach, except that the ABAP program doesn’t use an ALV grid but instead presents up to 19 fixed fields which are used to check against existing serial numbers. The screen is pretty dumb. When you have more than 19 items to enter, you have to scroll down to get to more fields.
For a human entering the data this is fine, for the most part, but transcription errors are easy to introduce and manual entry is terribly inefficient.
It’s unfortunate that this is such a difficult transaction to work with, and that serialization was apparently tackled as an afterthought in the way the program was enhanced.
When I tried to automate this process using an alternative method, say with a BAPI, I found one (BAPI_MATPHYSINV_POSTDIFF), but unfortunately it doesn’t support the important serialization aspect. Weird, since BAPI_MATPHYSINV_COUNT does…
So it’s disappointing, to say the very least. Looking out on SCN, it seems the only way around this is to once again get an ABAPer to write something to fix an inherent defect in the thinking around this important business process.
How do you ensure that your employees have just the right level of understanding and skills to meet the demands of the job, while keeping them motivated and assuring data quality?
Michael Management (MMC), a specialist SAP eLearning company, revealed some interesting characteristics about how SAP users feel about their training in its most recent survey.
Almost half of the respondents (44%) indicated that they preferred self-paced eLearning, and 43% indicated that they hadn’t received sufficient training to perform their job responsibilities adequately. Almost a third indicated that they should have received at least 41 hours of training in the last year to do their jobs properly, yet hadn’t received it. MMC also revealed that one of the reasons the training doesn’t happen is budget: for more than a quarter of the respondents there isn’t enough money for training, and another quarter indicated that there was nothing particularly useful or appropriate on offer. With only a quarter indicating that there isn’t enough time for training, what choices are left?
On the job training
On-the-job training seems to be the most common approach to getting new employees up to speed, but with this being heavily dependent on how well processes and procedures are documented, as well as the relative effectiveness of the person giving the OJT, it should be little surprise that employees are frustrated or demoralized. Larger, more sophisticated organizations have comprehensive and extensive training programs blending job shadowing, classroom instruction and eLearning, but it can be a challenge keeping this content current and contemporary.
One of the ways to minimize the dependence on training programmes for assuring data quality and business process efficiency is to automate as many processes as possible, or to reduce processes to simple procedures that can be easily understood and leave little to interpretation or discretionary decision-making. Businesses don’t necessarily want to employ automatons, but at the same time they do want to ensure that employees cross all the T’s and dot all the I’s in the execution of their job tasks.
Another survey, by ON24, a webcasting and virtual communications solutions company, indicated that more than half of entry-level employees need explicit skills training, and that the best-trained industry segment is medical and pharmaceutical, where half of respondents indicate that they receive good training. While almost half of all respondents reported that training is a top priority in their organizations, more than three quarters also feel that training helps them do their jobs better, improving company performance and enhancing personal career goals.
SAP implementation projects
Training as part of a new SAP implementation is pretty commonplace; it is usually an integral part of the project delivery and gets incorporated into the overall budget of the implementation. But long after the project has ended and the consultants have moved on, how does your organization ensure that employees know what they are doing, and why, to ensure data quality and proper business transaction processing?
Many of the companies that I interact with indicate that the appropriate use of SAP is not a core focus of training existing or new hires. What they do indicate is that data quality and operational data processing challenges continue to plague the business. The challenges are so great that they are scurrying to implement more robust approaches to data gathering and data maintenance in core systems of record like SAP.
By implementing increased oversight and governance to the end to end data management process, particularly for master data and master records objects, they are obviating the need to expend extraordinary amounts on expensive training and education programs for employees.
Your own army of SAP consultants
It’s my opinion that spending money on training employees how to use an infinitely configurable ERP solution like SAP is a bit of a double-edged sword. When you educate your employees about ERP capability through formal classroom programs that conclude with certification, you are grooming some of your potentially best employees to ultimately become SAP consultants. If you’re a consulting company or have other ambitious ERP projects, then fine.
If you can retain them and leverage that learning for future projects then that is all well and good, but invariably those very classroom settings, expensive as they are, don’t teach your employees much about the way you run your business and why certain processes and procedures exist. You’re better off investing in some very targeted eLearning and using that to explain concepts and principles relevant to your business and how it uses SAP. At Winshuttle this is further augmented by periodic webinars, online training courses and webcast content.
The difference between what you save with appropriate eLearning programs and expensive proprietary classroom courses is better spent on deploying lean data management approaches to automating and guiding processes and procedures.
Mechanisms such as automations with templates and workflows assure data quality, process governance and ensure compliant data management processes are in place and adhered to.
There are many options available for mass data maintenance in SAP and while third party applications that connect over RFC are not necessarily the ideal method for mass actions in SAP, the other options tend to be either very technical or very limited.
LSMW is quite frankly a sorely abused tool that auditors complain vociferously about. IT has no business messing with data in productive systems and business users have no business having access to LSMW.
The mass transactions and even the BAPIs on offer natively in SAP, including the newer and revised ones, lag behind the capabilities of the configured or customized transactions in ERP itself, and often never get configured or modified to align with what the business needs, so they simply never catch up.
It might be proposed that a business consider BODI/BODS, but this quickly falls into the realm of heavy technical lifting: architecting and configuring a complex solution with a long tail of deployment before the business can actually get going. Besides, BODS/BODI are often exclusively tools in the domain of IT or technically proficient users. If, in the intervening period between selection and implementation of such a solution, the business pivots or changes direction, how quickly do these solutions re-align with the requirements of the business? My experience, and that of others, suggests that they take a long time and that maintaining them comes at tremendous technical and operational expense.
In the absence of something better, business is back to an agile solution that can quickly connect to the configured transactions in SAP ERP (like MM01/MM02), calling them over RFC and filling the screen fields in the background (BDC) or interactively (GUI Scripting). Performance and throughput hinge on several factors, not least of which is the way the application is positioned and the resources committed to the task.
Proximity is everything
The closer the client is to the SAP applications, the faster the throughput (you’re not then dealing with network latency as a degrading factor). Also consider running these sessions against a batch server rather than just any old application server. If the RFC transaction calls have to compete with regular dialog users doing daily work, then someone or something is going to have a bad performance experience; the batch processor is tolerant, but the wetware (people) tend not to be…
The selected scripting method also has a tremendous impact. If you use GUI Scripting, the process is going to be dog slow as each screen gets painted and each field filled; this classic screen scraping should be avoided as much as possible.
While BDC over RFC does not perform as well as LSMW, you reap benefits in other ways: the user who owns the data gets to load it (not usually an option with LSMW), and any errors show up in the Excel workbook or Access database in line with the records you are trying to change or create. With other methods you often have to convert the data to CSV or text, load that file, and then try to reconcile the items against the log, which is not pretty and also pretty painful.
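To make that in-line error reporting concrete, here is a minimal Python sketch. The helper names (`build_bdc_rows`, the injected `post` callable) are illustrative assumptions, not a real Winshuttle or SAP API; the screen and field names follow SAP conventions but are only examples. The point is the shape of the result: each source row comes back with its own error column, rather than a separate log to reconcile.

```python
# Hypothetical sketch: assemble BDC-style (program, dynpro, field, value)
# tuples for each spreadsheet row and record any error message alongside
# the row that caused it, mirroring the in-workbook reporting described above.

def build_bdc_rows(record):
    """Turn one spreadsheet record into BDC-style screen/field tuples."""
    return [
        ("SAPLMGMM", "0060", "RMMG1-MATNR", record["material"]),
        ("SAPLMGMM", "4004", "MAKT-MAKTX", record["description"]),
    ]

def load(records, post):
    """Post each record via `post` (a stand-in for the RFC call);
    return the records annotated with an Error column."""
    results = []
    for rec in records:
        try:
            post(build_bdc_rows(rec))
            results.append({**rec, "Error": ""})
        except ValueError as exc:   # stand-in for an SAP error message
            results.append({**rec, "Error": str(exc)})
    return results
```

Writing the error back next to the offending row is what lets the data owner fix and resubmit without ever leaving the workbook.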
Finally, I will add that abuse of these third-party tools is also a challenge often seen in the field; consider where they are positioned.
Transaction recorders are not recommended for data sets of millions of records. Do they get used for them? Sure. But should they be? Probably not.
My experience is that once you hit about 30,000 rows of data, the question becomes: how long are you prepared to wait for this job to run? With throughput of around 30 fully loaded materials per minute using a single BDC session on a small ECC box, can you afford almost 1,000 minutes (16 – 17 hours) of load time?
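The arithmetic behind that estimate is simple enough to sketch. The throughput figure (30 materials per minute per session) is the one quoted above; the idea of running multiple parallel sessions is an assumption about how you might shorten the wall-clock time.

```python
def load_time_minutes(rows, rate_per_min=30, sessions=1):
    """Estimated wall-clock minutes to load `rows` records at
    `rate_per_min` records per minute per BDC session, with
    `sessions` sessions running in parallel."""
    return rows / (rate_per_min * sessions)

minutes = load_time_minutes(30_000)   # single session at 30/min
hours = minutes / 60                  # 1,000 minutes, roughly 16.7 hours
```

Running four parallel sessions, for instance, would bring the same 30,000-row job down to about 250 minutes, assuming the back end can sustain the rate.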
Do you have thoughts on better ways than these?
The SAP press engine wheeled out a big gun this month with the announcement of S/4 HANA, pitched as the successor to SAP ERP; Hasso Plattner and Bill McDermott both made statements about the importance of the announcement. While there still seem to be a great many unanswered questions about how S/4 will work and run, one thing is clear: SAP has been preparing for this announcement for some time. The specifics may still be very murky, but we do know a few very specific things.
The first fact that we know is that S/4 HANA will run on the SAP HANA platform and the primary user experience will be delivered through the Fiori application layer. For some companies this will be a welcome announcement as they toss coins to decide whether to adopt, stay with or abandon SAP as an ERP platform provider. There can be little doubt that for companies that have made a significant investment in SAP’s flagship ERP product, this announcement holds the promise of a new lease of life on SAP’s aging applications.
The HANA platform was the first horse to be presented from the new SAP stable, around 2010, with the SAP Fiori UX formally released a short time later; Fiori now boasts some 400+ role-based apps providing a personalized, responsive and simple user experience.
Certainly the customers I speak to are all over the place in terms of their hopes and aspirations for HANA. Some are great advocates, having already moved off a dependency on one of the traditional RDBMSs for the underlying database, but few if any talk about applications of their own that they have developed on the HANA platform. Internally we have played around with the standard APIs and demonstrated the ability to write directly to the tables, but without clear use cases it seems a little premature to put much more effort into this.
What does hold promise is the announcement that SAP S/4 HANA will be offered in the cloud, on-premise and as a hybrid deployment, to provide maximum choice to customers. The SAP Simple Finance solution was the first SAP S/4 HANA solution to be offered, announced in June 2014. While the specifics of the next-generation solutions seem a little vague, I did reach out to some of the more prominent advocates of HANA to determine just how customers could transition to an S/4 HANA experience today, even with only Simple Finance 1.0 on offer.
An important component that has to be considered is the SAP Landscape Transformation Replication Server, which you can learn more about through the Application Operations Guide targeted for customer consumption at help.sap.com.
SAP Landscape Transformation (SAP LT) Replication Server is technology that allows you to load and replicate data in real-time from ABAP or non-ABAP source systems to SAP HANA environments. LT Replication Server uses a trigger-based replication approach to pass data from the source system to the target system.
The Replication Server can be installed either as a separate SAP system, or if you have enough iron, on any ABAP source system.
The following graphic outlines the basic concept and the typical landscape (for an ABAP source system) using the trigger-based data replication approach of the SAP LT Replication Server.
In order to replicate data, the Replication Server needs to be configured to load and replicate data from one source system to up to four target database schemas in HANA systems. Data can be replicated either in real time or on a schedule. With trigger-based data replication, data is retrieved table by table from application tables in the source system (or systems). Transformation rules can also be defined for the replication process.
As you can see, the path from the application tables to the Replication Server is integrated by way of RFC, but between the Replication Server and a HANA target system the connection is a direct DB connection, and so is likely to perform well.
Typically on ERP systems, each time a connection is established, logon data and other system parameters, such as character code pages, are exchanged. This places a load of approximately 2.5 – 3 KB on the network each time a user logs on to the SAP system; for example, when you use a BDC session with Winshuttle Transaction to send data from Excel, the initial logon tends to take some time.
The largest overhead in an RFC transmission occurs when making the connection (or calling a function, such as a transaction). The size of the data blocks is configurable, but there is a whole art to choosing the optimal block size for the payload you intend to exchange over RFC; you often need a compromise between the size of the data block and the manageability of the data. SAP’s own tests have shown that for data of approximately 100 KB and greater, the overhead of making the connection or calling the function is negligible.
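That 100 KB figure is easy to sanity-check with the logon overhead mentioned above. Treating the connection cost as roughly 3 KB (an assumption based on the logon figure; real RFC framing varies), the handshake's share of the total exchange shrinks quickly as the payload grows:

```python
def overhead_fraction(payload_kb, handshake_kb=3.0):
    """Rough share of an RFC exchange spent on the connection
    handshake, assuming a ~3 KB handshake (illustrative figure)."""
    return handshake_kb / (handshake_kb + payload_kb)
```

At a 1 KB payload the handshake dominates (around three quarters of the traffic), whereas at 100 KB it falls below 3 percent, which is consistent with SAP's claim that the connection overhead becomes negligible at that size.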
SAP says large amounts of data only produce additional performance improvements when ‘very good data compression programs’ are used; one can only assume that the mechanisms used for LT replication fall into this category.
Certainly, returning to the considerations of finance: if your intent is to continue running your financials on ERP, and you plan to do more analytics and reconciliation on an S/4 system today, then at a minimum you will need to set up replication of BSEG, BKPF, BSIS and BSAS to the S/4 system. If you have big data in these tables, the replication may not perform very well unless perhaps you are running on a HANA database.
I have been told that in Simple Finance 1.0 the standard accounting interface BAPIs are used, but that in 2.0 there are dedicated posting programs, so for existing SAP customers, trying out S/4 today on Simple Finance 1.0 may not be as difficult as one thinks. You can read more about the possibilities for integrating ERP finance with S/4 in the blog post by Jens Krueger, and more on how to optimize RFC at help.sap.com.
Over the years I have gone to great pains to explain how businesses can save time and money by automating business processes in systems of record like SAP. Despite this, some doubt that real time and money savings can be obtained through automation of things like transactions.
The power of preparation
The approach is actually quite straightforward: if you already have your data in a spreadsheet, then the next question is how to get that data into the system.
Many respected experts, industry productivity gurus in particular, will argue that the data should never be in a spreadsheet and should never have found its way there. That statement rests on the assumption that data processing tasks are all very hygienic and straightforward.
As any business user will tell you, placing orders, making adjustments and mass-changing data are anything but straightforward, and a spreadsheet is often the most straightforward way of collating and compiling data before taking any action.
If you’re taking the output of the spreadsheet and either copying and pasting the data into entry screens or transcribing it manually, consider how much time you are spending on that task. In fact, if you are even converting it into comma-separated values for loading through some tool, consider the time that takes, how often you do it, and how reliable the approach really is.
Injecting data into systems is a prescription for improved productivity
Every minute you spend on transcription, copying and pasting, reformatting, and checking and rechecking is time you will never recover.
If you eliminate those steps, accelerate the data injection into the system of record and avoid the likelihood of duplicate entries, omissions and transcription faults then you will reap the reward of time savings.
Multiply the time saved by the value of your time and that’s a part of your ROI. You can test this out with any ROI calculator that allows you to define a given process.
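A minimal sketch of that calculation follows. The function and its inputs (minutes saved per run, runs per month, a loaded hourly rate) are illustrative assumptions, not any particular vendor's ROI calculator:

```python
def annual_saving(minutes_per_run, runs_per_month, hourly_rate):
    """Hypothetical ROI sketch: time eliminated per run, scaled to a
    year and priced at the user's loaded hourly rate."""
    hours_per_year = minutes_per_run * runs_per_month * 12 / 60
    return hours_per_year * hourly_rate
```

For example, a process that saves 45 minutes per run, twenty times a month, for someone whose time is worth $50 an hour, frees up 180 hours a year, or $9,000, before you even count the avoided rework from duplicate entries and transcription faults.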
The great thing about using a standardized method for interfacing with your system of record is that it can be used for small quantities and big quantities of data and depending on the nature of the tool that you use to create the automation, you may have many automation scenarios for different tasks that you need to perform.
While continuing to use spreadsheets to stage data may seem old fashioned to some users, the reality is that this is one of the most powerful and most flexible ways available for preparing data and while automation doesn’t necessarily save you time in the preparation stage it will save you time when you finally have to apply the prepared data to your system of record.
I recently attended a webinar by SAP and SuccessFactors that argued the merits of cloud solutions primarily in terms of the lower sustainment costs of SaaS, achieved by eliminating the opportunity for customers to customize their cloud application. What SAP Mentor Chris Paine promotes instead is the idea of enhancing functionality through extensions, using vendor-guaranteed APIs.
On the one hand it was argued that SaaS solutions aren’t customizable but are extensible but the specific nuances of the difference between these two concepts get very blurry very quickly.
As the speaker rightly pointed out, some organizations really do differentiate themselves from their competitors by virtue of the fact that they have unique business processes and unique business process solution requirements.
Unfortunately, this opportunity of SaaS cloud solutions does seem unattainable to organizations that already have a solution that addresses their needs, but for which they pay a heavy price, not only in maintenance but also in flexibility.
Creating safe connections
What SAP is now pushing, therefore, is the notion that you can augment your cloud-based solutions by making use of the SAP HANA Cloud Platform. For those who have been tracking HANA for some time, this is little or no surprise. Where HANA really differentiates itself, or appears to be trying to, is in the methods it uses to allow safe and secure integration between your SaaS solution and any applications you might develop as custom functionality in the cloud.
At Winshuttle I have seen similar challenges with clients at the desktop for years, and they have started to appear at the web-form level with low-code solutions as an increasing number of companies want to enable collaboration with their business partners, whether suppliers, vendors or joint ventures. Cross-company collaboration is nothing new; it helps reduce business friction and accelerates transaction processing, but it comes at a cost.
Aside from the tax associated with customizing ERP, there is an additional cost, most often manifest in the need for the business to open an access path in the suit of armour that it presents to the outside world in order to facilitate collaboration.
Sending email outside of your business is one thing but exposing your ERP and file systems is something entirely different especially if you are considering doing something mobile.
Systems of record exposed to the world
Interface buses and message-based systems take the business down this path very effectively; they are made robust with encryption, passwords and user identification, but they tend to be inflexible and rigid. If the requirements change, the interface and the message structure have to change, and this takes time. Backwards and forwards compatibility may also be a challenge, especially if you are dealing with many business partners.
While the new promise of the HANA Cloud Platform will not solve all the difficulties and resolve all the anxieties of companies pushing toward open systems and the cloud, it seems a good option for companies committed to a heavy investment in redeveloping existing custom solutions, or to a new way of thinking about how they use cloud applications and extend them with unique, business-differentiating features.
For those watching the budget, the pricing model seems heavily dependent on the number of extensions a company decides it needs. Extension packages are offered as Standard, Enterprise and Supplemental editions based on the size of the usage profile per named user. It is conceivable, therefore, that the cost can become significant when you have a large number of users.
I have been going through an exercise with colleagues evaluating stories around Business Process Automation solutions and where they fit into the classic data movement challenge with SAP. For some years now we have taken the position that there are three areas of interaction with ERP systems like SAP. At the lowest level there is the grunt work associated with manual entry.
You take data from a form or a phone call, open the ERP data entry screen, and enter the leave request, material definition, purchase requisition, sales order or manual journal. This approach to manual entry works perfectly well for up to a dozen or so entries of low to moderate complexity.
Manual entry gets a little hairier when it requires multiple levels of data entry and goes beyond a few simple screens. In some instances we find that companies prefer a template-based approach even for single records once the number of fields goes beyond about twenty to thirty. The rationale for using a template even on a single record is all about data quality.
By using a template they can positively influence the data quality metric, defining certain characteristics as mandatory or bound by certain rules, even when the SAP system itself has no rules around those fields.
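Such template-side rules can be sketched in a few lines. The field names and rules below are illustrative assumptions (not any real template product's rule engine); the point is that the template can reject a row before it ever reaches SAP:

```python
# Hypothetical template-side rules that go beyond what the SAP system
# itself enforces: mandatory fields, an allowed-value list, and a
# simple range check, all applied before posting.
RULES = {
    "material": {"mandatory": True},
    "unit": {"mandatory": True, "allowed": {"EA", "KG", "L"}},
    "gross_weight": {"min": 0},
}

def validate(record):
    """Return a list of rule violations for one template row."""
    errors = []
    for field, rule in RULES.items():
        value = record.get(field)
        if rule.get("mandatory") and not value:
            errors.append(f"{field} is mandatory")
            continue
        if "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"{field} must be one of {sorted(rule['allowed'])}")
        if "min" in rule and value is not None and value < rule["min"]:
            errors.append(f"{field} must be >= {rule['min']}")
    return errors
```

Rows that pass validation go on to be posted; rows that fail get their violations reported next to the data, which is precisely the data quality lever described above.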
This template-driven scenario is a classic business process automation sweet spot for business-user application development. As is inevitable in such circumstances, the question then arises: what does one do with exceedingly large volumes of data, or with terribly complex scenarios where a template-based approach is simply not good enough?
What we have noticed is that for relatively stable end-to-end processes, where purchase orders, sales orders or similar activities occur with regularity in high volumes, a more IT-delivered, system-based approach to data capture is preferable.
Minimizing human interaction
Often there is a requirement for a solution with as little human interaction as possible. Examples might be integration with a warehouse management system, a shopping-cart solution or a point-of-sale system.
Where these are in use, IDocs, EDI, or something like PI/XI or an integration server are by far the preferred approach. The reason is largely scale, but sometimes it is also the asynchronous way in which many of these kinds of solutions acceptably work.
The challenge, then, is addressing the mid-range of data movement: where there is enough data to create or maintain that manual entry is awkward or a nuisance, but not enough to justify building a formal IT ‘big solution’.
Part of the issue with this area of ERP use is that IT is nowadays expected to do much more with less, and as a consequence budgets are not being allocated to smaller business process improvement initiatives such as keystroke minimization, UI simplification or transcription avoidance. If the business wants these attended to, then it needs to fund such initiatives as projects from operational budgets.
Where IT will step into the fray with a large solution for the business is in extending access to the ERP platform via mobile devices. The thinking behind this is broad-brush, rather than addressing individual use cases or providing point solutions.
Solutions like SAP Personas and SAP Fiori are about making the functionality and data within SAP ERP accessible to a larger community in a more digestible way. Mobile solutions in particular are an interesting area for business process improvement, not only because they provide access but also because they accelerate the end-to-end process by enabling participants to be more responsive. They do not, however, address the challenge of working with a number of records simultaneously.
Additionally, mobile isn’t very useful for large quantities of discrete data entry, even for a single record. While I can see a salesperson entering a one-line sales order into a mobile device, the idea that they might enter a fifty-line sales order seems a little less likely.
If this challenge is well understood for a sales order, then extending it to the creation of a material master, which may have 100+ fields, makes the use of a mobile device seem even less acceptable.
When considering mobile solutions, it should be clear that while a mobile application often brings improved usability, the business's challenge may not be related only to usability; it may also be related to data volumes, and a mobile solution may not adequately address that.