It has been a week of activism and calls for action, with Extinction Rebellion causing major disruption in London.
But alongside grassroots demonstrations, there appears to be a growing awareness in the corporate world that every business has a role to play in tackling the catastrophic environmental risks the world now faces.
On Monday, Legal and General Investment Management (LGIM), the UK’s largest institutional investment firm, announced that, as part of its Climate Impact Pledge, it would not hold eight large global companies in its Future World funds. LGIM stated: “Where such companies are seen to take insufficient actions on climate risks, LGIM will also vote against the chairs of their boards, across the entire equity holdings.”
On Thursday, Bank of England governor Mark Carney published an open letter describing the findings of a new report in which global banks, central banks, supervisors and the wider financial community have signed up to a set of deliverable goals to help ensure a smooth transition to a low-carbon economy.
From a banking perspective, the focus is on building knowledge and sharing data so that the monitoring of climate-related financial risks is integrated into day-to-day supervisory work.
The promise of a “tech solution”
As Computer Weekly recently reported, PwC and Microsoft believe technology has an important role to play in helping businesses and governments tackle climate change and environmental issues. PwC’s new report, How AI can enable a Sustainable Future, looks at the use of AI-enabled smart technology across agriculture, water, energy and transport.
Examples include AI-infused clean distributed energy grids, precision agriculture, sustainable supply chains, environmental monitoring and enforcement, and enhanced weather and disaster prediction and response.
In the sectors it focused on, PwC estimated that AI applications could reduce global greenhouse gas emissions by 4% – more than the current annual emissions of Australia, Canada and Japan combined.
Experts agree that climate change will not only ruin the planet and kill off the polar bears; it will also have a major impact on global business. The message may not have the immediate impact of the Extinction Rebellion protests, but the words from Mark Carney, the LGIM statement and PwC’s findings may actually resonate more with business leaders. And if PwC’s forecasts are accurate, AI could help every country meet the target of becoming carbon neutral by 2050, as set out by the Paris Agreement.
There was a time when the major ERP providers were considered allies of the CIO. They were trusted advisors. From an IT decision-maker perspective, ERP software aimed to encapsulate best-in-class business processes in a single package.
In theory, different enterprise packages from the same ERP company would be able to share data; the packages would be pre-integrated and, as for master data management, the customer would have a single version of the truth.
The technique of bundling “free” or discounted enterprise software as part of a sales pitch meant that organisations were often enticed to buy more products from the same ERP company. On paper, at least, it made sense to invest in a single supplier for core enterprise systems.
Drawbacks of having a single source for ERP
But there are many drawbacks. Organisations standardising on a single ERP provider’s software stack are often relentlessly pursued by account teams at these companies to buy more and more stuff.
These days, that stuff tends to be cloud offerings.
As Computer Weekly has previously reported, Oracle executives referred to “cloud services” six times in the transcript of the 45-minute, third-quarter 2019 earnings call in March, posted on the Seeking Alpha financial blogging site. Similarly, SAP executives made three long statements regarding “as-a-service” in their fourth-quarter 2018 earnings call in January 2019, according to the transcript on Seeking Alpha.
SaaS increases ERP choice
KBV Research’s Global Software as a Service (SaaS) Market report forecasts that the SaaS market will reach $185.8 billion by 2024, growing at more than 21% a year. The ERP providers have spent the past few years fleshing out their SaaS strategies through strategic acquisitions and by building out cloud-based ERP software. However, they are no longer the only options available to the CIO. While an ERP suite can run a large chunk of a company’s business processes, there may be gaps and missing functionality. In the past, so-called “point solutions” filled these gaps. In the era of SaaS, the cloud has enabled companies like Salesforce and Workday to establish themselves as dominant players. Often, these SaaS products are best in class.
Multi-sourcing is the future of ERP
It used to be the case that the bulk of enterprise software spending usually went to a single ERP provider. Today, savvy IT decision makers are building out enterprise SaaS portfolios, with products and services from multiple SaaS providers.
Such a strategy breaks the grip the traditional ERP providers have had on their customers. The traditional ERP companies are set up to offer the CIO a one-stop shop for enterprise software. But businesses no longer standardise solely on an SAP or Oracle suite of products to meet their enterprise software requirements. As a consequence, the traditional ERP providers have gone shopping for smaller SaaS firms, in an attempt to sell what they would consider “a complete solution” – one they can claim meets all of a customer’s enterprise software requirements.
Such is the nature of innovation in the software industry that someone is bound to invent something new and original, which subsequently gains traction. Clearly, it is not realistic for the major ERP providers to buy every SaaS business whose offering fills a gap in their product portfolios.
Make integration a key requirement
When shopping for enterprise software, Forrester principal analyst Duncan Jones believes IT buyers need to put integration through open APIs high on their list of priorities. Businesses are told to reduce their reliance on custom code in ERP implementations. The same should apply to the customisations required to integrate the ERP with a third-party product.
If open APIs are made available, third-party SaaS companies can create pre-integrated products that fill the gaps in functionality that exist in the product portfolios of the traditional ERP providers. Assuming the enterprise SaaS landscape becomes more and more fragmented, IT buyers should expect enterprise software companies to provide ever greater support for integration with third-party SaaS products. For the traditional ERP providers, this is likely to be more cost-effective and strategically more sustainable in the long term than attempting to acquire every SaaS startup that has an interesting product.
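As a rough illustration of what such pre-integration can look like, the sketch below pushes approved expense reports from a point SaaS product into an ERP’s general ledger over plain REST. Every endpoint, field name and token in it is hypothetical – it simply assumes both suppliers expose documented open APIs, which is precisely the requirement Jones urges buyers to insist on.

```python
import requests

# Hypothetical endpoints -- neither URL refers to a real product API.
ERP_API = "https://erp.example.com/api/v1"
SAAS_API = "https://expenses.example.com/api/v1"
API_TOKEN = "..."  # in practice, fetched from a secrets store

headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Pull approved expense reports from the point SaaS product...
expenses = requests.get(f"{SAAS_API}/expense-reports?status=approved",
                        headers=headers, timeout=10).json()

# ...and post each one into the ERP's general ledger through its open API,
# rather than maintaining custom integration code inside the ERP itself.
for report in expenses:
    journal_entry = {
        "account": report["cost_centre"],
        "amount": report["total"],
        "currency": report["currency"],
        "reference": report["id"],
    }
    resp = requests.post(f"{ERP_API}/journal-entries",
                         json=journal_entry, headers=headers, timeout=10)
    resp.raise_for_status()
```

The point is that the connective code lives outside both products and touches only published interfaces, so either side can be swapped out without reworking the ERP core.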
Custom hardware is usually the only option for organisations that need to achieve the ultimate level of performance for AI applications. But Nvidia has taken massive strides in flipping the unique selling point of its graphics processing units (GPUs) from the ultimate 2D and 3D rendering demanded by hardcore gamers to the world of accelerated machine learning.
While it has been late to the game, Intel has quickly built out a set of technologies, from field programmable gate arrays (FPGAs) to processor cores optimised for machine learning.
For the ultimate level of performance, creating a custom application-specific integrated circuit (Asic) means the microelectronics can be engineered to perform a given task with the least amount of latency.
Custom approach: Tensor processing unit
Google has been pioneering this approach for a number of years, using a custom chip called the tensor processing unit (TPU) as the basis for accelerating its TensorFlow open source machine learning platform.
Its TPU hardware topped the MLPerf v0.5 machine learning benchmarks of December 2018.
Beyond Asics, IBM is now investigating how, in certain very specific application areas, quantum computing could be applied to accelerate supervised machine learning. It is actively looking to crowdsource research that can identify which datasets are well suited to quantum-accelerated machine learning.
Another option is the FPGA. Because it can be reprogrammed, an FPGA offers a cheaper alternative to an Asic. This is why Microsoft is looking at using FPGAs in its Brainwave initiative for accelerating machine learning in the cloud.
GPUs rule mainstream ML
Nvidia has carved a niche for more mainstream AI acceleration using its GPU chips. According to a transcript of its Q4 2019 earnings call posted on the Seeking Alpha financial blogging site, the company believes deep learning offers a massive growth opportunity.
Nvidia CFO Colette Kress said that while deep learning and inference currently drive less than 10% of the company’s datacentre business, they represent a significant expansion of its addressable market opportunity going forward.
In a recent whitepaper describing the benefits of GPUs, Nvidia stated that neural networks rely heavily on matrix math operations, and that complex multi-layered networks require tremendous amounts of floating-point performance and bandwidth for both efficiency and speed. “GPUs have thousands of processing cores optimized for matrix math operations, providing tens to hundreds of TFLOPS of performance. GPUs are the obvious computing platform for deep neural network-based artificial intelligence and machine learning applications,” it claimed.
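To see why, consider that a single fully connected neural network layer is, at its core, one big matrix multiplication. The minimal NumPy sketch below is an illustration of the arithmetic, not Nvidia’s code, and the layer sizes are arbitrary:

```python
import numpy as np

# One fully connected layer: outputs = activation(inputs @ weights + bias).
batch, n_in, n_out = 64, 1024, 512
rng = np.random.default_rng(0)

inputs = rng.standard_normal((batch, n_in)).astype(np.float32)   # 64 samples
weights = rng.standard_normal((n_in, n_out)).astype(np.float32)  # layer parameters
bias = np.zeros(n_out, dtype=np.float32)

# The matrix multiply alone is 64 x 1024 x 512 = ~33.5 million multiply-adds,
# and a deep network repeats this for every layer, every batch, every step.
outputs = np.maximum(inputs @ weights + bias, 0.0)  # ReLU activation
print(outputs.shape)  # (64, 512)
```

Because those millions of multiply-adds are independent of one another, they map naturally onto the thousands of parallel cores a GPU provides, which is the property Nvidia’s whitepaper is pointing at.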
Optimising x86 CPUs
Intel’s chips are CPUs, optimised for general-purpose computing. However, the company has begun to extend its Xeon processors with DL Boost (deep learning) capabilities, which Intel claims have been designed to optimise frameworks such as TensorFlow, PyTorch, Caffe, MXNet and PaddlePaddle.
It hopes organisations will choose its CPUs over GPUs because they generally fit in with what businesses already have. For instance, Siemens Healthineers, a pioneer in the use of AI for medical applications, decided to build its AI system around Intel technology rather than GPUs. The healthcare technology provider stated: “Accelerators such as GPUs are often considered for AI workloads, but may add system and operational costs and complexity and prevent backward compatibility. Most systems deployed by Siemens Healthineers are already powered by Intel CPUs.” The company aims to use its existing Intel CPU-based infrastructure to run AI inference workloads.
So it seems developments in hardware are becoming increasingly important. Web giants and the leading tech firms are investing heavily in AI acceleration hardware. At the recent T3CH conference in Madrid, Gustavo Alonso, of the systems group in the Department of Computer Science at ETH Zürich, noted that AI and machine learning are expensive. “Training large models can cost hundreds of thousands of dollars per model. Access to specialised hardware and the ability to use it will be a competitive advantage,” he said in his presentation.
It is too early to draw definitive lessons from the tragic loss of life in the Ethiopian Airlines Flight 302 crash on March 10, 2019. As has been reported across the web, the crash bears remarkable similarities to Indonesia’s Lion Air crash of October 29, 2018. Both involved Boeing 737 MAX aircraft. To quote from a statement made by Ethiopian Airlines’ group CEO, Tewolde GebreMariam: “Until we have answers, putting one more life at risk is too much.”
What is known today is that the crash appears to be a side-effect of a software system known as the Maneuvering Characteristics Augmentation System (MCAS). Boeing says MCAS has been designed and certified for the 737 MAX to enhance the pitch stability of the aircraft. Across the web there have been reports of how the system got confused during take-off, forcing the nose down to prevent the aircraft from stalling. The plane continued to dive, despite the pilots’ efforts to regain control. Reporting the preliminary findings of the investigation into the Ethiopian Airlines Flight 302 crash, the Wall Street Journal noted that a suspect flight-control feature automatically activated before the plane nose-dived into the ground.
Technically speaking, MCAS is a stall prevention system. According to CNBC, since the crashes of the two 737 MAX planes, Boeing has faced fierce criticism for not doing more to tell flight crews about the stall prevention system or to alert them when the technology kicks in. It reported that only one angle of attack (AOA) sensor for MCAS was fitted as standard; airlines were asked to pay extra to have a second AOA sensor installed.
Earlier this week, Boeing issued a software update. According to the company, the update has been put through hundreds of hours of analysis, laboratory testing, verification in a simulator and two test flights, including an in-flight certification test with Federal Aviation Administration (FAA) representatives on board as observers.
It said the flight control system will now compare inputs from both AOA sensors. “If the sensors disagree by 5.5 degrees or more with the flaps retracted, MCAS will not activate. An indicator on the flight deck display will alert the pilots.”
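Based solely on Boeing’s public description above, the new cross-check amounts to a simple guard condition. The Python sketch below is an illustrative simplification, not actual flight-control code; the 5.5-degree threshold and the flaps-retracted condition are the only details taken from Boeing’s statement, and everything else is assumed for illustration.

```python
AOA_DISAGREE_THRESHOLD_DEG = 5.5  # figure from Boeing's public description

def mcas_may_activate(aoa_left_deg: float, aoa_right_deg: float,
                      flaps_retracted: bool) -> bool:
    """Illustrative sketch of the updated MCAS activation cross-check.

    Per Boeing's statement, MCAS will not activate if the two angle of
    attack sensors disagree by 5.5 degrees or more with the flaps
    retracted; a flight deck indicator alerts the pilots instead.
    """
    disagreement = abs(aoa_left_deg - aoa_right_deg)
    if flaps_retracted and disagreement >= AOA_DISAGREE_THRESHOLD_DEG:
        return False  # sensors disagree: inhibit MCAS, alert the crew
    return True

# A 10-degree split between the sensors with flaps up inhibits MCAS...
print(mcas_may_activate(15.0, 5.0, flaps_retracted=True))   # False
# ...while closely agreeing sensors leave it able to activate.
print(mcas_may_activate(14.0, 13.5, flaps_retracted=True))  # True
```

The substance of the fix is not the arithmetic but the design decision: the system now distrusts a single sensor and hands control back to the humans rather than acting on potentially bad data.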
Balancing safety-critical automation with human operators
What is clear from these reports is the complex set of technical and ethical issues that must be addressed in developing safety-critical augmented systems that need to coexist with highly trained individuals. Neither entrusting everything to the computer system nor deferring every decision to a human is the right approach. While the FAA investigation is likely to conclude that the Ethiopian Airlines Flight 302 crash was down to software, could a tragedy like the Germanwings Flight 9525 crash on 24 March 2015 have been avoided if the flight control software had actively prevented the co-pilot from flying the aircraft into the Alps?
IT budgets in general do not grow at a rate that can sustain stellar financial performance across the IT industry.
However, Gartner’s latest spending forecast projects that worldwide software spending will grow 8.5% in 2019, and a further 8.2% in 2020, to total $466 billion. According to Gartner, organisations are expected to increase spending on enterprise application software in 2019, with more of the budget shifting to software as a service (SaaS).
Among the IT firms hoping to capitalise on this shift is SAP, with S/4Hana, the company’s updated ERP system that runs off an in-memory database.
Those organisations still running SAP’s older ERP Central Component (ECC) system are only guaranteed support until 2025; beyond this date, SAP has not made a firm commitment to carry on support.
Does SAP see Hana as a cash cow?
Looking at the transcript of SAP’s Q4 2018 earnings call, posted on the Seeking Alpha financial blogging site at the end of January 2019, “Hana” is referenced 29 times. “It’s a Hana world,” proclaimed CEO Bill McDermott. When asked about the company’s plans for growth, McDermott claimed that Hana is the ultimate platform for a modern enterprise: “You think about what we can do with Hana as database-as-a-service, you think about Leonardo with predictive AI and deep machine learning and IoT, we’re going to double down on these things.”
Putting it more bluntly, SAP CFO Luka Mucic said: “The S/4Hana upgrade cycle drives potential for substantial further renovation of a company’s IT architecture and gives us multiple cross-selling opportunities.”
However, as Computer Weekly recently reported, a new study from Resulting IT questioned whether IT decision-makers can build a compelling business case for the upgrade from ECC. Computer Weekly spoke to former Gartner analyst Derek Prior, who co-authored the report.
While S/4Hana has lots of nice stuff, Prior argued, it is a different product: “It is different to ECC. You spent decades bedding in ECC, then it all changes with S/4Hana, which is quite different.”
Complexity and risk of an ERP migration
S/4Hana is not simply an upgrade to the latest version of SAP. It is an entirely different product, with new functionality that may not map easily onto how the business currently operates with ECC. The majority of people who took part in the Resulting IT study are more likely to choose a brownfield deployment than to redevelop everything they have built in ECC from scratch. Resulting IT believes that, with an estimated 42,000 SAP customers on ECC, upgrading to S/4Hana is going to cost £100bn globally in IT consulting fees.
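A quick back-of-the-envelope calculation puts that figure in perspective. Spreading the estimate evenly across the installed base – which real projects never do, so this is purely illustrative – gives an average bill of well over £2m per customer:

```python
total_consulting_cost_gbp = 100_000_000_000  # Resulting IT's £100bn global estimate
ecc_customers = 42_000                       # estimated SAP customers still on ECC

avg_cost = total_consulting_cost_gbp / ecc_customers
print(f"£{avg_cost:,.0f} per customer")  # £2,380,952 per customer
```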
This translates into a huge expenditure for organisations. In the case of a brownfield deployment, the potential benefits of upgrading to S/4Hana are minimal – organisations will essentially spend a lot of money, and may well experience major disruption, by embarking on a new ERP project that effectively delivers more or less what they already run on ECC.
Support beyond 2025
Clearly SAP wants people to buy into S/4Hana, and may well incentivise organisations to purchase it. However, organisations should not be held to ransom by their IT supplier. There is no good reason to spend money without a compelling business case. A vague reference to digital transformation and AI-empowered business processes may sound good on paper, but organisations will need to construct a watertight case for upgrading from ECC to S/4Hana.
Just as with legacy banking systems, by using a third party for support, ECC customers can still take advantage of emerging technology and undergo their digital transformations. If a stable ECC can be maintained and supported, organisations can focus on adding functionality around it, without the need to migrate to S/4Hana. New products and services can be developed entirely separately from the existing ERP, while ECC, supported by a third party, is kept as the system of record. The core ERP metamorphoses into a legacy system.
As SAP’s recent Q4 2018 results show, the IT industry relies almost entirely on organisations upgrading. Often these upgrades either add little value or, as in the case of S/4Hana, customers end up not using the new functionality. So, given that IT budgets are tight, upgrade if there is a compelling business case. If there isn’t, now is the time to add SAP ECC to the legacy IT estate.
In the three decades since its introduction, the web has informed, educated and entertained society. It has given musicians, artists and businesses of any size connectivity to a global audience. Anyone with something to say can express themselves and publish on the web.
Sir Tim Berners-Lee could not have anticipated that his invention would have such a profound impact on society. Who would have thought in 1989 that, with just a few taps on a touchscreen-enabled device or the click of a mouse, a globally connected web would make it possible to stream music and movies; transfer money and pay for goods instantly; order a pizza; book a foreign holiday; and arrange a taxi pickup?
Online replaces high street shopping
Blockbuster, Maplin and many other high street retailers have failed to capitalise on the opportunities the web offers. Instead, the likes of Spotify, Netflix and the behemoth Amazon are taking a bigger and bigger share of people’s wallets. Perhaps Maplin’s next chapter, as an eclectic online bazaar for all things tech and electronic, may turn the business around.
In the UK, department stores such as Debenhams and House of Fraser have failed to stem the decline in sales. People can buy things far more easily online than by trying to track down something they really want in the high street. John Lewis, a company renowned for its peerless customer service, is another department store coming under the spotlight. In its financial statement, the retailer attributed its poorer-than-expected results partly to increased IT costs. “Over the last few years we have steadily increased IT investment to set ourselves up for the future. A number of those significant new systems are now operational resulting in incremental maintenance, support and depreciation costs,” the company stated.
This shows that the John Lewis Partnership is investing in the future. The only way it can address the online threat is to invest heavily in IT. Similarly, the princely sum of £750m that Marks & Spencer has paid for half of Ocado shows that investing in technology is the only sure way to keep up with the likes of Amazon, especially since the e-commerce giant acquired Whole Foods in 2017 for a whopping $13.7bn – roughly 18 times what Ocado is receiving from M&S. The acquisition of Whole Foods put Amazon in direct competition with the likes of Waitrose (part of the John Lewis Partnership) and M&S, which may be the reason behind M&S’s Ocado tie-up.
In 1994, when it was set up, Amazon was just an online bookstore. It quickly squeezed booksellers such as Waterstones in the UK, and later music stores began to see sales plummet. Remember Tower Records and Virgin Megastores? HMV is struggling to remain relevant.
Connectivity creates business opportunities
Thanks to its global reach, the web has enabled companies to connect to one another, creating complex business ecosystems in which organisations can find a niche to add value. Ocado, in fact, could be regarded as a warehouse-as-a-service business – providing distribution and online deliveries for Waitrose, Asda and, through its new business venture, M&S.
Even Royal Mail is not immune. It has finally come round to the idea that there is a business in delivering people’s Amazon purchases. It even handles Amazon returns, without the customer needing to print out a return label. Numerous newsagents and dry cleaners are official drop-off and click-and-collect partners for online stores. Argos’s click-and-collect and drop-off service for eBay buyers and sellers shows that the high street can adapt.
Changing trade connectivity
The winners on the web will be the organisations with agile business models that can adapt quickly to new opportunities. One can imagine that a hotel group like Hilton would never have contemplated that its business could be disrupted by a web service that owns no hotels – but this is exactly what Airbnb has done. Now, thanks to the web, Alibaba can offer a global trading hub, connecting Chinese manufacturing directly to anyone who needs something made. And thanks to the web’s global reach, anyone who can spot a product with potential, and is prepared to take a punt, can connect with suppliers based anywhere in the world and become a distributor.
While bricks-and-mortar businesses have had to comply with local laws, pay business rates, invest in buildings and hire tax-paying staff, online businesses have used global web connectivity to flout local regulations, get around employment law by not having permanent staff, and relocate their head offices to tax havens.
This has put traditional firms at a disadvantage, and today’s web appears to be owned by a few mega-businesses.
As the web turns 30, perhaps now is the time to sit back and evaluate how best to curb some of its excesses.
The House of Lords Select Committee on Communications’ Regulating in a Digital World paper, published on March 9, warns: “The digital world has become dominated by a small number of very large companies. These companies enjoy a substantial advantage, operating with an unprecedented knowledge of users and other businesses. Without intervention the largest tech companies are likely to gain more control”.
While the internet existed long before the World Wide Web (WWW), the web changed everything.
Its success has as much to do with the simplicity of using a web browser as with the fact that the technology was put into the public domain, and with the timing of its invention.
A lesson from the past
During the 1980s, the TCP/IP protocol suite evolved to the point where basic command-line tools could be used by Unix users and administrators to share documents between networked computers.
The internet was predominantly used in academia. In the commercial space, proprietary email services offered walled gardens, available only to subscribers. But in 1988, thanks to Vint Cerf, the internet was opened up to commercial email services. This opened the internet to everyone, and laid the foundations for a global communications network connecting HTTPD web servers to users’ HTTP web browsers.
In his March 1989 paper, Information Management: A Proposal, Sir Tim Berners-Lee described the original premise of the WWW: an approach to enable people at Cern to share documents easily.
While it started at Cern, within three years the WWW and the HTTP protocol were in the public domain. Then it took off.
No one owns the web
There have been plenty of attempts to make the web proprietary, but in its purest form, the WWW has remained free and open. However, the web represents many more things today than it did 30 years ago. It is the basis of social media platforms, music and video subscription services and global online shopping centres. Every business wants to own its customers’ web experience. But this is not why the web has been so successful.
Last year, Berners-Lee published an open letter in which he explained why the web needs to be more open, rather than users’ experiences being defined by the web giants. He argued that, just like a software product, the web itself can be refined and its “bugs” ironed out.
Just as CompuServe and AOL offered walled gardens before the web, the likes of Amazon, Facebook and Netflix now often represent people’s primary experience of the web. They are not in the public domain. And while they may be built on open source software, they are commercial services, effectively closed off from a WWW that offers free access to all. Three decades on from its invention, now is the time for society to consider how the web should evolve and what role commercial exploitation of it should play.
The Irish Data Protection Commission’s (DPC) annual report makes interesting reading, given that the World Wide Web is celebrating its 30th birthday this month.
People regularly give away vast amounts of personal data through social media and instant messaging platforms like Facebook, Instagram, WhatsApp and Twitter.
These web giants need to comply with the General Data Protection Regulation (GDPR). In its annual report, the DPC said it has 15 statutory inquiries open into multinational technology companies’ compliance with GDPR.
The firms investigated are: Apple, Facebook, Instagram, LinkedIn, Twitter and WhatsApp.
As for Facebook, the DPC said it is conducting several investigations into the social media platform’s compliance with GDPR. In relation to a token breach that occurred in September 2018, the DPC said it was examining whether Facebook Ireland has discharged its GDPR obligations to implement organisational and technical measures to secure and safeguard the personal data of its users. The DPC said it was also looking at whether Facebook has met its GDPR breach notification obligations.
Facebook, LinkedIn, WhatsApp and Twitter are all being investigated over how they process personal data.
GDPR and advanced analytics
The wording in the annual report concerning the LinkedIn inquiry is particularly intriguing. In the report the DPC states it is: “Examining whether LinkedIn has discharged its GDPR obligations in respect of the lawful basis on which it relies to process personal data in the context of behavioural analysis and targeted advertising on its platform.”
The fact that the DPC is looking at LinkedIn’s use of behavioural analysis is significant. The web giants rely on understanding their users better than the users know themselves. This level of AI-enabled advanced analytics and machine learning is now available to more and more organisations, not just the multinational tech companies the DPC is investigating.
The outputs from the DPC’s investigations are very likely to heavily influence the way organisations use advanced analytics on web data that can identify individuals.
Ultimately, they may even influence how the WWW evolves, and whether today’s web giants, as well as those in the making, will be able to sustain business models that see them through the next 30 years.
Next month marks the 30th anniversary of the World Wide Web. In 1989, who would have thought the web would touch every aspect of people’s lives – not only in a good way, but also in ways that seem to undermine the fabric of society? What began as an elegant way for researchers across the globe to collaborate, and grew into a platform for free speech, has become a swamp seeping disinformation, hate, paedophilia and online bullying.
For instance, the BBC’s Countryfile, broadcast on Sunday 17 February, reported on how illegal gambling rings live-stream blood sports such as hare coursing and cock fighting over Facebook and YouTube. And on today’s web, it seems open debate and fair comment can lead to a tirade of abuse targeted at anyone who appears to hold a different opinion.
People must understand they are being nudged
The DCMS’ Fake News and Disinformation report discusses at length how easy it is for organisations to target social media users en masse in the same way online marketing campaigns are used to sell and recommend products.
The techniques have become increasingly sophisticated. Behavioural economics uses so-called “nudge” technology to try to influence people. The ability to target individuals online through carefully crafted advertising campaigns with subliminal messaging is moving beyond the big marketeers and state agencies with a subversive agenda. Now, a service called TheSpinner claims: “TheSpinner enables you to subconsciously influence a specific person, by controlling the content on the websites he or she usually visits.” Sold as a service starting at just $29, anyone can sign up and target another individual – for example, in the run-up to a marriage proposal, by ensuring their special person sees a series of 10 related articles when they are online. “People need to be aware this technology can be used,” warns Bridget Kenyon, global chief information security officer at Thales, mirroring one of the findings in the DCMS report.
Digital literacy is key
Facebook, YouTube et al take the original premise of the web and democratise information sharing to the point that anybody can post an update, image or video anywhere, at any time. And anyone can receive that post, no matter how irrelevant or inappropriate it is. However, as the DCMS recommends: “Digital literacy should be a fourth pillar of education. People need to be resilient about their relationship with such sites, particularly around what they read and what they write.”
The term “digital” is referenced only twice in the 87 minutes of GE’s Q4 2018 earnings call, which took place at the end of January. In a transcript of the call, posted on the Seeking Alpha financial blogging site, none of the participating financial analysts asked about the company’s digital strategy, in spite of GE announcing a new $1.3bn digital business on December 13 2018. Their main concern was the company’s Power business, which is referenced 62 times in the transcript.
GE was among the industrial giants that showed huge potential in evolving from a company that makes big machines into one that sells software-powered services. So where is that strategy now? Last December, Bill Ruh, the GE executive who led it, left the company.
Computer Weekly first started reporting on the GE story in 2013, when Ruh was vice-president for software at GE Research. At the time, Ruh talked about how industrial IoT would enable GE to predict machine failures and power a service-led business. He eventually headed up the company’s digital division, GE Digital.
GE joined the ranks of traditional organisations pioneering platform businesses. Its former CEO, Jeff Immelt, was regarded as a digital visionary. Under his watch, GE became a 150-year-old startup. In a 2015 McKinsey article he wrote: “We want to treat analytics like it’s as core to the company over the next 20 years as material science has been over the past 50 years.”
In the company’s Q1 2017 earnings statement, released just a few months before he stepped down, Immelt stated: “GE is continuing its portfolio transformation and investing in innovations in GE Digital and GE Additive.” The term “digital” appears five times in that statement.
Immelt’s tenure as the CEO of GE was plagued with problems in the company’s power division, something that remains a big issue.
In June 2017, he was replaced by John Flannery, who has subsequently been replaced by Larry Culp. Wind the clock forward to Q4 2018 and, in the latest earnings call transcript on Seeking Alpha, “digital” is referenced only twice – to announce that GE Digital will be spun off as a separate company.
GE also announced an agreement to sell a majority stake in field management services firm ServiceMax, a company it acquired in 2017. As Computer Weekly reported at the time of the acquisition, ServiceMax was among the key components in GE’s digital strategy. When combined with the concept of running digital twins of customers’ machines, Ruh believed GE could move beyond simply predicting machine failures. Instead, he said GE would be able to deliver business outcomes to its customers, such as higher production yield.
Digital version 2.0
Analyst house Forrester believes that among the positives to come out of the demise of GE Digital version 1 is that a version 2 business will be able to operate as a separate company. “GE Digital can focus on developing as a software business and not an internal IT shop for GE industrial units,” Forrester noted.
However, the challenge GE has faced all along is that it is not recognised for its software business. Immelt acknowledged this in a 2017 Harvard Business Review article: “It will take years for GE to fully reap the benefits of the transformations,” he wrote.
When Computer Weekly met Ruh in July 2018, he described how Saudi company Obeikan was building food and beverage applications on top of GE Digital’s Predix platform, helping GE expand the reach of its software. Now that Ruh has left GE, what will GE Digital version 2 evolve into?
Growing a software arm is hard. Looking at GE’s progress to date, there are wider lessons to be gleaned. Every business that wants to compete effectively with agile startups and the web platform giants will need to go through challenging times before its transformation is complete. Those who start the journey may not be the ones who complete it.
Incidentally, “Additive”, which Immelt highlighted in his 2017 trading statement, refers to using 3D printing in manufacturing. And yes, GE does have an additive manufacturing division. But it is not mentioned a single time in the transcript of the Q4 2018 earnings call, posted on Seeking Alpha.