The tech industry loves big events, and for the host cities these huge gatherings represent a massive boost to the local economy.
Thousands upon thousands of customers, sales people and industry executives travel to glamorous cities around the world. They endure long flights, dehumanising passport and immigration control, and extortionate airport taxi fares, all so they can sit in dark conference rooms and breakout sessions, where they drink too much bad coffee and eat too many unhealthy snacks. They attend to listen to the best brains in the business talk about the latest thinking and innovations, to network, and maybe get to see a megaband perform a very lucrative gig.
But with the risk of coronavirus spreading, next week’s Mobile World Congress has been cancelled.
There are also reports across the web that Facebook and Intel have cancelled events, and on Friday 14 February, IBM pulled out of the annual RSA security event as a platinum sponsor.
Connectivity without physical travel
For an industry that preaches the information revolution and the freedom it gives individuals to remain connected wherever and whenever they need to, IT executives seem to spend their entire working lives travelling from country to country.
One HR manager told Computer Weekly she hadn’t been home for “months”. Another tech head said he was off to Edinburgh after a meeting with clients in London, and then heading to Sydney. According to figures from the Committee on Climate Change, based on 2014 data, the average UK household generates around 2.2 tonnes of CO2 a year for heating. Data from the Carbon Footprint calculator (https://calculator.carbonfootprint.com/calculator.aspx?lang=en-GB&tab=3) shows that a one-way ticket from London to Sydney would produce about 2.55 tonnes of CO2. Is it absolutely necessary to travel to a conference or client meeting, or can unified communications and collaboration technology achieve the same results more quickly and conveniently, with less health risk and lower environmental impact?
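Those two figures make the comparison stark. As a quick back-of-the-envelope check, using only the rough estimates quoted above:

```python
# Figures quoted above are rough estimates from the cited sources,
# not precise measurements.
HOUSEHOLD_HEATING_TONNES_CO2_PER_YEAR = 2.2   # avg UK household (CCC, 2014 data)
FLIGHT_LONDON_SYDNEY_TONNES_CO2 = 2.55        # one-way, Carbon Footprint calculator

ratio = FLIGHT_LONDON_SYDNEY_TONNES_CO2 / HOUSEHOLD_HEATING_TONNES_CO2_PER_YEAR
print(f"One one-way London-Sydney flight = {ratio:.2f}x a year of household heating")
# A single executive's one-way trip emits more CO2 than heating an
# average UK home for an entire year.
```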
When it met at the end of January, the World Health Organisation stated that it is possible to interrupt virus spread, “provided that countries put in place strong measures to detect disease early, isolate and treat cases, trace contacts, and promote social distancing measures.”
Visiting any conference can potentially increase the risk of the virus spreading. But beyond the current health scare, it is time for the IT industry to assess the value of such events, the value of flying people out for client visits, and the environmental impact of both.
This is a guest blog post from Bola Rotibi, Research Director, Software Development, CCS Insight, looking at disaster recovery in cloud services.
The comfort blanket of the cloud can make us somewhat blasé, with the common practice of relying on an integrated cloud storage service providing anywhere access as sufficient protection.
Yet even with this, individuals and organisations alike have experienced data loss that has disrupted workflows, caused frustration, hit productivity and been financially detrimental.
When it comes to data and information that is valued in any capacity, protecting against loss becomes paramount. Putting in place disaster recovery is a time-honoured practice and necessary requirement for doing business in the modern world. Internationally applied regulations ensure sensitive customer data is kept safe, secure, private and traceable at all times to prevent malicious, erroneous and non-consensual use by third parties. The fines for non-compliance can seriously hurt.
However, you don’t have to look too hard to find that, even with disaster recovery plans in place, too many organisations are in danger of losing large amounts of data, with significant financial implications. Organisations are being caught out through a lack of regular testing and of the necessary plans. More worryingly still, some smaller organisations don’t feel they have the resources to match the disaster recovery investment and planning made by larger organisations.
Many organisations falsely assume that data and information stored in cloud applications and services is safe from loss. However, without a plan that actively addresses protecting critical data stored in the cloud through Software-as-a-Service (SaaS) solutions in operations, that comfort blanket could just as easily smother an organisation when the light gets turned off.
Hyperscale cloud platform providers have security, redundancy and recovery measures in place that make it very unlikely they will lose your data, but they are not infallible.
That said, cloud disaster recovery has come a long way with respect to the tools and services that are now available that significantly ease the support of backing up and restoring data held within cloud solutions. A number of providers now actively work to give flexible levels of backup and restore control to many of the widely adopted SaaS and integrated cloud storage solutions in the market. Cloud levels the field for smaller organisations to implement disaster recovery plans comparable to those implemented by larger firms. The ability for cloud to remotely store valued data and information so it can be recovered faster through a variety of access points and devices, makes it a strong platform for data replication and failover services.
A cloud disaster recovery plan is a necessary addition to wider disaster recovery, not least because it brings a level of versatility that is open to the broadest range of requirements, individuals and organisations. But what it doesn’t do is substitute for well thought-out planning, investigation and regular testing to continuously identify weaknesses and evolve maturity. True resiliency is a state of mind that can’t be backed up by a technology-only approach.
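That point about regular testing can be made concrete. Below is a minimal, illustrative sketch of an automated restore test; it assumes nothing about any particular backup product, and the file names and copy-based "restore" step are stand-ins for whatever tooling an organisation actually uses:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_test(original: Path, backup: Path) -> bool:
    """Restore the backup to a scratch location and verify it matches
    the original byte for byte. A backup that has never been restored
    is a hope, not a plan."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / original.name
        shutil.copy2(backup, restored)      # stand-in for a real restore step
        return checksum(restored) == checksum(original)

# Illustrative usage with throwaway files:
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "customers.db"
    src.write_text("critical business data")
    bak = Path(d) / "customers.db.bak"
    shutil.copy2(src, bak)                  # the "backup"
    assert restore_test(src, bak)           # regular, automated verification
```

Run on a schedule rather than once at rollout, a check like this is what separates a tested plan from an assumed one.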
Earlier this week, SAP acknowledged that many of its customers are going to take far longer to move to S/4Hana, its next generation ERP system.
For over 20 years, Computer Weekly has looked at the complexities of implementing SAP ERP systems. It began towards the end of the 1990s, as companies began reporting that ERP failings had contributed to lower than expected financial results. One of the first businesses Computer Weekly looked at was Danish Hi-Fi maker, Bang & Olufsen. This was later followed by implementation issues at Volkswagen, WH Smith, Hershey, Goodyear and most recently, Lidl and Avon.
In the 1990s, ERP systems like SAP R/3 were part of businesses’ Y2K mitigation plans to replace mainframe systems that could fail due to the date bug. Former Gartner analyst Derek Prior began working at the analyst firm in 1998, having previously worked in hardware sales. He says enquiries from Gartner clients at the time were not about Unix hardware being unable to run global business operations that previously ran on a mainframe, but about the challenges of ERP implementation.
These challenges have led to two decades of lessons learnt, centres of excellence and best practices, which are helping organisations around the globe keep their SAP ERP Central Component (ECC) systems running.
Migrating away from this stable platform, which has become essential to business operations, is considered far too risky by many of SAP’s customers. In October 2019, Forrester published the Look Beyond ERP report, in which analyst Liz Herbert discussed the challenges of updating ERP systems. “ERP customers are more risk averse. They are typically finance, operations, and manufacturing professionals who rightfully fear that the wrong kind of disruption in their systems could cripple the enterprise,” she says.
Then there is the elephant in the room, which takes the form of Oracle. The database company has had a partnership with SAP since 1998, and its relational database is at the heart of many ECC implementations. While S/4Hana is a modern ERP and has its own in-memory database system, migrating to this new database while implementing the ERP system looks like a big bang project. Such big bang projects were the ones that went spectacularly wrong in the late 1990s. Let’s hope the decade or so of ECC support that remains is used wisely.
On 17 December 2019, the US District Court, Northern District of California, San Jose Division, dismissed the 2018 class action case brought by the City of Sunrise Firefighters’ Pension Fund against Oracle, which alleged securities fraud.
The pension fund has until 17 February to provide a list of people with written statements that back up its claim.
On the one hand, the case is a win for Oracle. But in many ways, the evidence presented by a confidential witness is a bit of an own goal. According to court papers posted on CaseText, the witness, a regional sales manager for Oracle, reported that he (and other sales teams) discussed their “audit-driven cloud deals.”
It is something IT buyers should be mindful of, whenever they receive a call from Oracle sales.
Oracle as a cloud services provider
During the earnings call for the company’s Q2 2020 results, CEO Safra Catz stated that total cloud services and license support revenues for the quarter were $6.8 billion, up 3%. Catz said these revenue streams accounted for over 70% of total company revenues.
Has Oracle, the top database company, moved its customers to the cloud, and is now achieving record growth?
Gartner’s latest Magic Quadrant for cloud infrastructure describes Oracle as a “niche cloud platform”. In its report of the cloud infrastructure market, Gartner notes: “Oracle is mainly targeting customers who want to run Oracle software on cloud IaaS, particularly those who prefer to run on Exadata appliances and bare-metal servers.”
So it is no surprise, the company has combined maintenance contracts and cloud subscriptions in its latest financial reporting. And there is anecdotal evidence it is also actively encouraging its sales team to grow this revenue stream. “We are seeing, as part of any audit settlement, strong pressure to acquire some Oracle Cloud products,” Robin Fry, a solicitor and legal director of Cerno Professional Services, who specialises in helping clients deal with software audits, recently told Computer Weekly.
While Oracle may indeed succeed, in enticing customers to buy its cloud platform, a recent poll by The Itam Review has found that IT buyers are treating cloud deals like shelfware – and not bothering to use them. But unlike traditional software, cloud infrastructure requires on-going investments in datacentre capacity, funded by customer renewals. The question prospective Oracle cloud customers need to ask is: how long will Oracle continue to invest, if it remains a niche cloud services provider?
Last week, Microsoft chief Satya Nadella made headlines with an audacious plan to eradicate the company’s historic carbon footprint by reversing all its emissions since 1975.
As political and business leaders head to the World Economic Forum in Davos this week, the risks associated with climate change and a fall in biodiversity are top of the agenda. The signals are there: the weather extremes in 2019, the fires across Australia, flooding in Lincolnshire, the decline of pollinators and the millions of hectares of Amazon deforestation.
Extreme weather, failure of climate change mitigation and human-made environmental damage top the latest Global Risks Report 2020 from the World Economic Forum. Børge Brende, president of the World Economic Forum, has called on industry and political leaders to work across all sectors of society to repair and reinvigorate systems of cooperation, to tackle deep-rooted risks. But there is a resurgence of isolationism, and some political leaders lack the will to push through environmental policies that may be unpopular with voters or make their economy less competitive.
Then there is the hypocrisy of the major economies trying to encourage the developing world to make sustainable decisions. After all, the most successful countries have exploited the environment and natural resources without considering the future impact. What right do they have to stop developing countries from progressing?
Brende believes business leaders should step up to the mark and lead where the politicians have thus far failed. So for the CEO of one of the world’s most successful businesses to commit to reversing his company’s carbon footprint is significant. Detractors will argue that the tech industry is a clean industry, and that Microsoft’s efforts will not have a significant effect on tackling climate change.
But climate change has a direct impact on economic growth. For instance, how will agriculture be impacted by a decline in pollinators, droughts or flooding? Even financial services and insurance sectors will be impacted. Speaking prior to the start of the WEF, Peter Giger, group chief risk officer, Zurich Insurance Group, said: “If weather patterns change, how do I look at the probability of events occurring?”
There is also a growing realisation that people are empathising with brands that are aligned with their own values. In 2018, Apple CEO, Tim Cook asked the tech sector to consider what sort of world it wanted people to live in.
Apple has made the privacy of its users a top priority. Nadella wants Microsoft to be seen as an environmental champion. No one company or country can turn around the environmental crisis that is unfolding. But Microsoft’s carbon reduction strategy will directly impact its global supply chain and may indirectly influence other manufacturers whose businesses are heavily dependent on its technology. It’s a small step, a drop in the ocean, but global warming is something the global community must address.
In this guest blog post, Satyam Vaghani, VP/GM of IoT & AI at Nutanix, discusses how IT will need to change to support edge computing.
The rise of the edge will require a redistribution of the centre of gravity, as well as a major shift in compute paradigm from mainly “human-oriented” at the core (applications like email, surfing the web, social media and so on) to predominantly “machine-oriented” processing – as when collecting sensor data and using AI and analytics techniques to convert raw data into business insights. For example, enterprises will need to manage security and infrastructure lifecycle at thousands of mission critical datacentres, as well as hundreds or thousands of applications, most of which will require weekly, or even daily, refresh. It also calls for a much more distributed and interconnected approach between the core and hundreds if not thousands of edges to make sure they work as a holistic whole.
Take a holistic approach
In just a few years we’ve seen devices deployed at the network edge increase almost exponentially to process more data than all the public and private clouds of the world combined. Managing the volume, velocity, and variety of data at industrial scale, and refining it at the edge to get actionable insights, often under tight time constraints, are key problems which, as yet, nobody has fully solved although many are trying and making good progress.
IoT is never an edge-only or a cloud-only problem. Most IoT apps span the edge and the cloud with major implications for the security of both, not least because of the sheer size of the potential attack surface. Nothing can be assumed; new and innovative threats are being released all the time, making robust edge security with minimal oversight at the core an absolute must.
It may have gone unnoticed amid the January 14 end-of-support deadline for Windows 7, but Microsoft’s 10-year-old OS had one last Patch Tuesday update. And, surprise, surprise, this included a critical security update for CVE-2020-0611, which the NSA reported as a remote desktop vulnerability affecting Windows 7 and newer operating systems.
In the past, Microsoft has remained committed to releasing the most critical security patches for unsupported operating systems, such as the Windows XP fix for the WannaCry attack, which afflicted systems around the world, including legacy hardware at the NHS. In February 2018, the Lessons learned review of the WannaCry Ransomware Cyber Attack report for NHS England reported that 80 out of 236 hospital trusts across England were affected, along with 595 out of 7,454 GP practices (8%) and eight other NHS and related organisations.
Organisations have had several years to migrate to Windows 10, which was released in 2015, starting the five-year countdown to Windows 7 end of support. But businesses do not generally shift from something that works well – like Windows 7 – to a new operating system just because Microsoft has released a new version. Migrating large PC estates can take years, as older PCs are replaced with new ones running the latest Windows OS. Certain applications and embedded systems cannot easily be migrated to the new OS, and remain on an unsupported operating system, leaving them vulnerable to cyber attacks.
Could something like WannaCry happen again, with a vulnerability impacting legacy Windows 7 machines? Certainly every Patch Tuesday from now on will list critical vulnerabilities in Windows 10. How many of these also impact Windows 7?
“WannaCry was a clear example of the dangers that businesses can face when they are using software that has reached end of life,” says Ian Wood, Senior Director, EMEA Cloud & Governance Business Practice at Veritas.
Critical to health
Looking at the health service, many unsupported devices cannot simply be disconnected from clinical networks without severely disrupting operations, given how critical they are to clinical workflows and patient care delivery. For example, MRI machines can be operational for over 20 years, far outliving their operating systems. The more devices running on unsupported operating systems, the larger the attack surface and the longer the exposure to cyber risk.
Data pooled across several hospitals from healthcare cybersecurity specialist, Cynerio, has found that radiology departments are most affected. Its research found that 40% of all connected medical devices run on Windows and almost 45% of devices like MRIs, CTs, and X-Rays run on Windows 7. These machines have particularly long life cycles. From this data, Cynerio estimated that over 20% of all medical devices run on the unsupported Windows 7 OS. Unsupported devices cannot be fully secured unless taken offline. “No device is risk free, especially network-connected devices. Medical devices are the weakest link: they are not designed with security in mind, have extensive lifecycles, and often cannot afford any downtime,” says Leon Lerman, Cynerio’s CEO and co-founder.
Nelson Petracek, chief technology officer at Tibco, says that one of the issues in deploying and managing edge computing devices is how the device metadata will be managed and governed. Such metadata may contain the device location, manufacturer, date of installation and last maintenance date.
In this guest blog post Petracek discusses how the device topology and relationships can be managed and governed and how this representation can be kept in sync with the physical layout:
With respect to device metadata, there will usually be a catalogue or metadata repository included as part of the overall architecture. It’s most likely that it will be in the datacentre or in the cloud and will act as a centralised function. Not only will this catalogue provide a picture of what is deployed and where, but there may also be data that is needed from this catalogue by different areas of the IoT processing pipeline during device data processing.
When running logic, say at the gateway level, it may be necessary to draw upon reference data to make an educated decision. For example, various parts of metadata around the devices, including the manufacturer, when it was put into service, or its maintenance history might be required in order to complete the decision making process.
It is unlikely that organisations will want 100,000 locations hitting this central store for metadata every time a device activates. Instead, it’s more likely that metadata will be applied closer to the datacentre or to the cloud, especially when using it as part of the model or the rules.
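One common way to keep those 100,000 locations from hammering the central store is a time-bounded metadata cache at each edge site, refreshed lazily. The sketch below is illustrative only; the catalogue lookup, device IDs and metadata fields are hypothetical:

```python
import time

class EdgeMetadataCache:
    """Caches device metadata locally at an edge site so that every
    device activation does not trigger a round trip to the central
    catalogue. Entries expire after a TTL so catalogue updates
    (e.g. a new maintenance date) eventually propagate."""

    def __init__(self, fetch_from_catalogue, ttl_seconds=3600):
        self._fetch = fetch_from_catalogue  # callable: device_id -> dict
        self._ttl = ttl_seconds
        self._cache = {}                    # device_id -> (expiry, metadata)

    def get(self, device_id):
        entry = self._cache.get(device_id)
        if entry and entry[0] > time.monotonic():
            return entry[1]                 # fresh local copy, no network hit
        metadata = self._fetch(device_id)   # fall back to the central store
        self._cache[device_id] = (time.monotonic() + self._ttl, metadata)
        return metadata

# Hypothetical catalogue lookup, counting round trips:
calls = []
def catalogue_lookup(device_id):
    calls.append(device_id)
    return {"id": device_id, "manufacturer": "Acme", "installed": "2018-03-01"}

cache = EdgeMetadataCache(catalogue_lookup)
cache.get("meter-001")
cache.get("meter-001")   # served locally; the catalogue was hit only once
assert len(calls) == 1
```

The TTL is the knob that trades metadata freshness against load on the central catalogue.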
Maintaining and managing the overall relationships between devices – the topology – is also critical. It’s key to understand where the device is, to what it is attached, and how everything is linked together. This information is key to understanding the behaviour of an IoT network, and can help in ensuring that decisions are determined and optimised in the correct context and state.
One well-known example comes from what you would see in a power grid. For example, if you’re a utility company responsible for distributing power to consumers, you are concerned with how power gets from the source to a meter attached to a house. Many pieces of equipment, and thus devices, are involved in this process, including the meter, transformers, substations and so on. When looking at a distribution network for electricity, you have a vast network of linked devices, and for a variety of reasons (safety, accurate maintenance, capacity, thresholds) it is important to have an accurate picture of not only the devices themselves, but also their relationships. If a new meter or line is added or a transformer is changed, the blueprints and recorded topology must also reflect this change. Changes must make their way back to the metadata management environment so that proper decisions can be made, both in batch (future infrastructure planning) and real-time (power delivery and restoration in the event of a failure).
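That distribution network is naturally modelled as a graph, with devices as nodes and physical links as edges. A minimal sketch, using entirely hypothetical device names, of tracing which equipment sits between a source and a meter:

```python
from collections import deque

# Adjacency list: device -> devices it feeds (hypothetical layout)
topology = {
    "substation-A": ["transformer-1", "transformer-2"],
    "transformer-1": ["meter-101", "meter-102"],
    "transformer-2": ["meter-201"],
}

def path_to(topology, source, target):
    """Breadth-first search from source to target, returning the chain
    of equipment in between (or None if no path exists)."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Which equipment sits between the substation and meter-102?
print(path_to(topology, "substation-A", "meter-102"))
# -> ['substation-A', 'transformer-1', 'meter-102']
```

A changed transformer or a newly added meter means updating this topology, mirroring the blueprint and metadata sync described above.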
There are complete systems for managing this, but it can be quite complicated. However, that level of complexity isn’t always required in order to achieve the capability, but some mechanism is necessary for capturing changes – whether it’s just through automated processes or a periodic introspection, or a combination thereof.
What’s actually out and deployed must be reflected in the device and metadata catalogue. We are not yet at the point where we can dynamically go out and automatically discover all the devices that are running everywhere along with their relationships and metadata. One can do some of this automatically, but there is likely some aspect that unfortunately is still going to be manual.
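Reconciling what is actually deployed against the catalogue can start as simply as a periodic set comparison; a minimal sketch with hypothetical device IDs:

```python
def reconcile(catalogue_ids, discovered_ids):
    """Compare the device catalogue against what periodic introspection
    actually found in the field. Returns the devices missing from each
    side so the drift can be investigated (or auto-corrected)."""
    catalogue = set(catalogue_ids)
    discovered = set(discovered_ids)
    return {
        "in_catalogue_not_deployed": sorted(catalogue - discovered),
        "deployed_not_in_catalogue": sorted(discovered - catalogue),
    }

# Hypothetical snapshot: one device retired, one installed but unrecorded
drift = reconcile(
    catalogue_ids=["meter-101", "meter-102", "xfmr-1"],
    discovered_ids=["meter-102", "meter-103", "xfmr-1"],
)
print(drift)
# {'in_catalogue_not_deployed': ['meter-101'],
#  'deployed_not_in_catalogue': ['meter-103']}
```

Whatever fills the `discovered_ids` side – automated discovery, periodic introspection, or manual survey – the catalogue only stays trustworthy if this comparison runs regularly.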
The shortage of Intel processors is having a knock-on effect on IT’s ability to upgrade whatever legacy Windows 7 PCs still remain in the organisation onto new Windows 10 hardware.
At the time of writing, Microsoft said there were no plans to delay the end of support for Windows 7. “We have been partnering closely with our OEMs to prepare,” it told Computer Weekly.
However, the supply shortage of Intel powered PCs through the channel to business customers has led to concern about planned shipments.
From Computer Weekly’s understanding of the situation, the PC channel has prioritised orders for Windows 10 upgrades. This has been happening since the start of 2019, and so far the prioritisation has meant that the big planned rollouts appear to be continuing. But smaller rollouts and new orders are being impacted.
Upgrading older PCs
While some Windows 7 PCs can be updated to Windows 10, these PCs tend to be a few years old and close to the time they would naturally be retired. Updating them now not only has the potential to make them unstable and slower, but the time and expense of the update would be wasted if the machine is replaced in six months or so, when Intel supply is set to return to normal.
A recent freedom of information request submitted by Citrix to NHS trusts found that half of trusts are still running Windows 7 machines. Similarly, there are reports on the internet suggesting that 1,800 HMRC PCs are still on Windows 7.
As Computer Weekly has previously reported, some PC channel companies have been told by manufacturers to prioritise shipments relating to Windows 7 end of support.
But inevitably, some organisations will find they need new PC desktop and notebook equipment. One PC channel organisation Computer Weekly spoke to said: “We can’t give an assurance on new orders being here in the next six weeks – so it’s more impacting people that haven’t planned ahead or have relatively small requirements, as they’re hitting the back of the queue.”
Analyst Gartner has recommended that IT managers desperate to source new business PCs as part of their Windows 10 refresh, purchase AMD-powered hardware. Lenovo, for instance, says it has a wide selection of AMD-based ThinkPads available via the channel to suit the varying needs of business from SMBs through to enterprise and the public sector.
In the last few days, HP Inc and Dell have posted results in which their respective chief financial officers have spoken about the issues their businesses face with the supply of Intel processors. This affects the ability of these PC firms to fulfil orders of new PCs for enterprise customers and for consumers of high-end PC devices, which tend to use Intel chipsets.
Struggle with Windows 10 refresh
In a transcript of the October 26 earnings call, posted on the Seeking Alpha financial blogging site, HP Inc’s CFO, Steve Fieler, discussed how the supply issues could impact the company’s enterprise customers and their ability to refresh Windows 7 PCs with new Windows 10 machines. Microsoft is due to end support for Windows 7 on January 14, 2020, but the supply issues with Intel processors have meant that PC manufacturers are struggling to ship new Windows 10 PCs to their enterprise customers. “It could be that these current supply constraints actually indeed help prolong the Win 10 refresh. And so there’s a lot of dynamics going on. And that’s why I think the seasonal patterns are likely to be affected, both from a supply, but also on the potential extension of the Win 10 refresh,” Fieler said.
Worsening supply of Intel chips
Similarly, Dell admitted that the Intel CPU shortages have worsened quarter over quarter. In the transcript of the earnings call, posted on Seeking Alpha, Jeffrey Clarke, vice chairman of products and operations at Dell, said the supply issue with Intel processors “is now impacting our commercial PC and premium consumer PC Q4 forecasted shipments.”
Looking at the response from Dell’s CFO, Thomas Sweet, on the question of how Intel’s processor supply issues affect Dell’s ability to fulfil orders from its enterprise customers, it appears that about two-thirds of Dell’s enterprise customers have migrated from Windows 7 to new Windows 10 PCs. That still leaves a third of Dell’s enterprise customers on Windows 7.
In the Seeking Alpha transcript of its Q3 2019 earnings call, Intel CEO, Bob Swan acknowledged the supply issues with processors saying: “We’re letting our customers down, and they’re expecting more from us. PC demand has exceeded our expectations and surpassed third-party forecasts.”
But failing its customers – the PC manufacturers – has a direct impact on every enterprise IT department’s ability to complete the refresh to Windows 10 before the Windows 7 end-of-support deadline. The ball is now in Microsoft’s court.