Should an enterprise’s robotic process automation (RPA) strategy include working with more than one vendor? In a recent interview about its multiyear RPA project with Redwood Software, a global manufacturer of automotive parts revealed that it was simultaneously using UiPath, a leading RPA vendor.
“You don’t want to put all your eggs in one basket,” said Anna Berger, functional design lead in finance at Faurecia, which has operations in 35 countries.
The multivendor RPA strategy makes sense for the automotive company.
Redwood Software has extensive experience in automating finance processes, in particular ERP systems from Oracle and SAP. Faurecia has worked with SAP for over a decade, rolling out a single, integrated SAP system to about 98% of its 300 sites; it has some 35,000 users in SAP.
“We have taken the position that for processes which are quite standardized and which are very linked to SAP, we want to go with Redwood, which is strongly integrated with SAP,” Berger explained. Faurecia is using Redwood on a major project to consolidate 25 shared services down to a handful of regional platforms.
UiPath and Redwood RPA projects follow similar best practices. For example, tremendous effort goes into making sure the process is accurately described and that everyone agrees on the process description. But the automation focus is different.
“UiPath is for topics which are less linked to SAP and which may have to go between multiple systems and [involve] processes which are less standardized,” Berger said.
Multivendor RPA strategy
Forrester Research analyst Craig Le Clair, who follows the RPA market closely, said a multivendor RPA strategy is becoming more common. “I see it trending that way,” Le Clair said. One reason for the movement toward multiple vendors is that RPA came in initially through the business side, which hired its own integrators.
As the IT side has gotten involved, business units that are happy with their RPA deployments don’t want to switch vendors, resulting in a multivendor RPA strategy by default. Second, the RPA platforms have different talents — increasingly by necessity.
“When you have a lot of the market value captured by the top three RPA vendors, the other 40 have to find spots,” Le Clair said, referring to Blue Prism, UiPath and Automation Anywhere as the triad at the top. A vendor like Redwood, which specializes in finance processes and has expertise in Oracle and SAP, has developed a library of bots to automate this area of operations, saving companies time.
The RPA platforms also tend to split between those that are focused on back-end functions where interaction with humans is minimal and those that are being deployed in call centers, where the automation must co-exist with an employee who may be jockeying between 15 and 20 screens. An example of the bifurcation in the market is Blue Prism, which doesn’t do contact centers, Le Clair said.
In any case, Forrester believes the RPA market is still on a strong growth trajectory for 2019.
Sophie Vandebroek, vice president of emerging technology partnerships at IBM, suggested companies start with scrutinizing their training data. “AI algorithms are only as good as the data used to train the system,” she said.
For example, an AI algorithm trained on data from a company that has more men than women software engineers might conclude that men are better software engineers than women, “which of course we know has nothing to do with the job and is irrelevant to the hiring decision,” Vandebroek said.
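Vandebroek’s hiring example can be reproduced with a minimal, hypothetical sketch: a naive model that simply memorizes the majority outcome per group will “learn” whatever imbalance exists in its training data. The records and groups below are invented for illustration only.

```python
from collections import Counter

# Hypothetical historical records: (group, hired). Men dominate the pipeline,
# so hires skew male for reasons unrelated to job performance.
history = ([("M", True)] * 8 + [("M", False)] * 2 +
           [("F", True)] * 1 + [("F", False)] * 3)

def train(records):
    """'Train' by memorizing the majority hiring outcome per group."""
    outcomes = {}
    for group, hired in records:
        outcomes.setdefault(group, Counter())[hired] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in outcomes.items()}

model = train(history)
print(model)  # the model now predicts hiring outcomes by group alone
```

The "model" concludes that group membership predicts hiring, which is an artifact of the skewed data, not a real signal; this is the failure mode that auditing training data is meant to catch.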
Checking the training data for bias is necessary but not sufficient for ridding AI algorithms of bias, said Gabi Zijderveld, chief marketing officer and head of product strategy at Affectiva.
Bias can also be subconsciously coded into an algorithm by developers. “We build what we know,” Zijderveld said. She recommended that companies strive for diversity when putting together their teams, as they can collectively act as an anti-bias failsafe. Besides, she said, diversity also “fosters creativity and innovation.”
CIOs can also help set the tone by investing in a strong culture, according to Nichole Jordan, managing partner of markets, clients and industry at advisory and accounting firm Grant Thornton LLP. “You’re bringing together a lot of individuals with different backgrounds — behavioral scientists, data scientists, sociologists — to work together,” she said. “You’ve got to be clear on your culture.” It’s important that a company’s ethics, values and acceptable behaviors are spelled out to employees.
The tech world has been abuzz since news broke that IBM is snapping up Red Hat for a whopping $34 billion, making it the third-biggest tech deal of all time.
Experts say, however, that the IBM-Red Hat deal will likely have little influence on enterprise IT cloud strategies, at least for now.
Ed Featherston, vice president and principal architect at Cloud Technology Partners, sees it as a move aimed at boosting IBM’s profile in the cloud space to give the company more credibility when it talks to customers about hybrid cloud.
“Without having done this, IBM would have probably faded away,” Featherston said. “I can’t think of a single client I have worked with in the cloud, whether public, private or on prem, that doesn’t have some level of the Red Hat stack in those environments, and now it’s [going to be] IBM Red Hat stack in their environments.”
Gartner analyst Dennis Smith believes the move will help both companies reset their cloud narrative.
Though Red Hat has had some initial success with the open source container application platform OpenShift, it needed some additional wind at its back from a scale and growth standpoint, Smith said. The deal also provides IBM with the potential to have a better story when talking to clients about their legacy applications, he added.
“Assuming [these clients] are a very loyal IBM customer and have a very large legacy footprint that they are looking at moving into some type of a cloud environment, they have additional options now,” Smith said.
IBM-Red Hat deal: Will it affect the cloud landscape?
During a media call earlier this week, Arvind Krishna, senior vice president of IBM Hybrid Cloud, said the acquisition redefines the cloud market.
“As our clients are moving to hybrid cloud, and they all use multiple cloud, they need the technology and they need a platform that lets them operate in that environment with security and comfort of portability,” Krishna said. “Together, that’s what we can get for them.”
But Gartner analyst Lydia Leong believes the IBM-Red Hat deal will have very little impact on the current cloud landscape and should not influence buying decisions related to public cloud.
Red Hat’s relative strength in on-premises container solutions does not translate into any form of leadership in the cloud market, Leong said. Containers are not cloud, she said; they are an infrastructure construct that customers may use as part of a cloud solution, just as they use virtual machines as part of a cloud solution.
Major public cloud vendors — AWS, Microsoft Azure and the Google Cloud Platform — all offer Kubernetes-based container services that come at no extra charge to their customers, she said. The customer pays for the infrastructure cost but doesn’t pay extra to run containers, she explained.
“Plus, the use of containers, regardless of whether or not that includes Kubernetes, doesn’t make applications magically portable,” she added.
But IBM has continued to look for ways to build business outside of its on-premises infrastructure, sales and services business. Having ownership of a Linux distribution that is used widely across all the cloud vendors could be an interesting way for IBM to build cross-cloud services, said Scott Cameron, Azure Principal Solution Architect at Insight.
“But I don’t see it immediately affecting most of our customers and how they deploy cloud,” Cameron said.
Should CIOs care?
At least to start, Featherston believes CIOs have nothing to be worried about. The IBM-Red Hat deal is expected to close sometime in the second half of next year. As long as IBM lets Red Hat be Red Hat — allowing the open source provider to continue with the way it has conducted business — the impact for CIOs, whether they choose to go with IBM or not, should be minimal, he said.
He also sees no reason for existing IBM customers to think about changing their cloud strategy because if anything, the deal will ultimately give them more flexibility.
“They can feel safer if they want to stay with an IBM cloud model,” he said. “But they don’t have to feel that it’s a risk if they go elsewhere because they are still going to be an IBM customer, they are still going to be running Red Hat in AWS [if they decide to go with Amazon cloud].”
Data and analytics are part of everyday survival for companies. But according to a new research note from Gartner Inc., organizations are struggling to manage the data they have, let alone establish a plan for the data that’s coming.
“This fruitless trend will continue unless technical professionals act now in preparing for the future demand,” stated the “2019 Planning Guide for Data and Analytics” report, which was published earlier this month.
To that end, Gartner analysts identified five trends that CIOs and information architects should pay close attention to in the upcoming year, one of which is how artificial intelligence and machine learning will become useful information management tools.
The report is centered on Gartner’s logical data warehouse, or LDW, a data management practice born out of the big data era. Rather than build a single, central data repository, Gartner began recommending that companies create a single view of the data through a virtual data layer that connects data repositories scattered around the enterprise and in the cloud.
The analysts caution that there is no single blueprint for establishing such a flexible architecture nor will it happen overnight. Instead, the report states, “the shift will be gradual and incremental — but also inevitable.”
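The virtual-layer idea can be illustrated with a minimal sketch: instead of copying data into one central repository, a federated query joins data that lives in separate stores at query time. The stores, tables and figures below are hypothetical, and two in-memory SQLite databases stand in for scattered enterprise systems.

```python
import sqlite3

# Two independent "repositories": an ERP store and a CRM store (hypothetical).
erp = sqlite3.connect(":memory:")
erp.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
erp.executemany("INSERT INTO orders VALUES (?, ?)",
                [(100, 250.0), (101, 75.5), (100, 40.0)])

crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)",
                [(100, "Acme"), (101, "Globex")])

def revenue_by_customer():
    """Federated 'view': joins data across the two stores at query time,
    without ever copying it into a central warehouse."""
    names = dict(crm.execute("SELECT customer_id, name FROM customers"))
    totals = {}
    for cid, amount in erp.execute("SELECT customer_id, amount FROM orders"):
        totals[names[cid]] = totals.get(names[cid], 0.0) + amount
    return totals

print(revenue_by_customer())
```

A real data virtualization layer does this with query pushdown, caching and governance on top, but the core move is the same: the consuming query sees one logical view while the data stays where it lives.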
With an LDW in place, Gartner analysts now recommend that CIOs establish an analytics architecture that’s flexible enough to support both traditional and newer analytics techniques. The architecture should utilize a plug-and-play framework that can integrate external and internal data sources and combine the data collection, analysis and delivery of insights into a single discipline.
The ultimate goal of such an architecture is to accelerate digital transformation and support an “analytics everywhere” environment, where analytics can be delivered even at the edge of the enterprise. The detailed report walks CIOs and information architects through all five trends and provides planning suggestions for the coming year.
One trend is that AI and machine learning coupled with an LDW can create a more intelligent way to manage data. The benefits are a two-way street: An LDW can provide AI and machine learning the high quality data needed to produce good results and the infrastructure needed for model deployment. And AI and machine learning can help manage complex workloads, query data efficiently, and size up both the data type and content as it enters the LDW, which the report described as “one of the most exciting areas of the market today.”
To get started in 2019, the analysts recommended that CIOs and information architects look for use cases, such as workload management, where introducing AI and machine learning could make a dent in performance, or where there’s some overlap between the LDW and AI and machine learning, such as data quality.
In most enterprises, data quality processing for data warehouses is separate from AI and machine learning work. CIOs may want to collapse those two efforts together and use “the industrial strength data transformation and quality tools used for the LDW for machine learning,” according to the report.
They also suggested that CIOs take stock of the kinds of tools that are already at their disposal. According to the report, “most commercial database management system software has useful libraries of the most popular ML algorithms. When new analytical requirements can be met by common ML algorithms, these incumbent libraries provide a simple and low-cost means of meeting analytics requirements.”
Earlier this month I wrote about lessons businesses can learn from the Facebook data breach that affected millions of users. News has now surfaced that Facebook is rumored to be shopping for a cybersecurity company to help boost their security operations and prevent another major hack.
I reached out to Bryce Austin, CEO at TCE Strategy, and Vijay Pullur, CEO at ThumbSignIn to see what they thought of this potential acquisition and whether acquiring a cybersecurity company makes sense for Facebook as the company tries to rebuild its reputation after recent security lapses.
Editor’s note: The following transcript has been edited for clarity and length.
If it moves forward with the acquisition of a cybersecurity company, what will it mean for Facebook?
Bryce Austin: It is an interesting idea for Facebook to purchase a cybersecurity company, as their cybersecurity issues appear to be a failure of imaginative thinking rather than a lack of the fundamentals of cybersecurity. People are using Facebook’s tools/features in combinations that Facebook never conceived of, and it was the combination of three different tools that led to the latest breach Facebook announced. If the purchase of an outside firm can help with that, then I’m all for it, but I hope they are looking for the equivalent of Disney’s Imagineers rather than a more traditional cybersecurity outfit.
Facebook has larger reputational issues with the fundamentals of their business model as opposed to issues with their cybersecurity. Facebook’s users do not fully understand the ways that their personal data is being analyzed and sold for Facebook’s profit. The recent discovery that Facebook is sending targeted advertising to the phone number provided by cyber-aware users for multi-factor authentication is a prime example.
Facebook needs to be open and transparent about their behavior and business ethics. Facebook has a choice to be an ethical beacon (similar to Volvo’s commitment to safety) or an ethical nightmare (similar to Volkswagen’s blatant hacking of U.S. diesel emission standards). They can use sensitive analytical information they are able to glean from their users for customers’ benefit, or to their detriment.
Vijay Pullur: The need for Facebook to make an acquisition has become urgent. It shows that they are taking security very seriously, and that helps with improving the public perception. I see it as a sincere effort to fix security.
It’s not that Facebook doesn’t have security experts — they must have an army of security specialists — but this acquisition will help them bring more competence, focus and deeper knowledge to the security domain.
But as bad actors become more sophisticated, it’s important to remember that security is not a one-time fix. You have to constantly ward off threats preemptively, with the help of machine learning, AI and a new generation of software technologies.
Financial services firms must respond to changing customer demands, cultivate personalized relationships and maintain those ties across multiple channels. Artificial intelligence is often among the technologies underpinning digital transformation. When it comes to digital transformation for banks, however, projects can also focus on the more basic, yet critical IT components: the core, customer-facing website, for example.
That was the case with Federal Bank Ltd., which replaced a custom-built website built on Microsoft’s .Net framework with a new corporate site based on open source technology from Liferay Inc., a software company based in Diamond Bar, Calif. The change, along with a UI overhaul, has led to a 5x improvement in daily site visits. The increased traffic translates into more customer leads for the Aluva, India-based bank that operates more than 1,000 branches and ATMs.
Last week, Apple CEO Tim Cook delivered the keynote speech for a security conference in Brussels, Belgium, where he passionately advocated for the United States to implement a federal data privacy law similar to the EU’s GDPR.
Not only did he call for a data privacy law with teeth, he framed the issue as a battle between corporations that are using technological advances to “weaponize” consumer data for their own enrichment and those that recognize the need to respect consumer privacy rights.
Analysts and tech executives said they believe Cook’s clear call for legislative action will move the needle on data privacy rights.
“Influential figures like Tim Cook speaking out against the misconception that laws like CCPA and GDPR will hamper technical innovation and profitability is a powerful tool in changing the conversation about legislation,” said Jessica Ortega, product marketing associate at Sitelock, a web security vendor.
Christopher Plummer, senior cybersecurity analyst at Catholic Medical Center, said Cook’s stance on a federal data privacy law will not only have pull in federal and congressional debates, but with the general public as well.
“[This speech] is undeniably powerful — nearly half of Americans with a phone in their hands have one created by Apple, all of whom will positively benefit from the climate [Cook] is cultivating,” Plummer said.
Business community reaction
Ortega said she expects Cook’s advocacy for federal privacy legislation to meet resistance from large segments of the business community — and not just from some of the big tech companies whose business models depend on collecting massive amounts of consumer data.
“Businesses and lobbyists have been vocal against more consumer protective regulations because of the cost involved with implementing changes to be compliant,” she said.
Dana Simberkoff, chief risk, privacy and information security officer at AvePoint, a GRC vendor, said she believes Cook’s speech has shined a much-needed light on data privacy rights.
“While consumers are increasingly aware that they have privacy rights that companies may not be honoring, it’s also clear that they do not quite understand what these rights are — or how they can exercise them,” Simberkoff said.
Unlike some, she also views the federal data privacy law as ultimately proving to be good for businesses. “This legislation will create more legal certainty for businesses and help build consumers’ confidence and trust in businesses’ privacy controls and practices,” she said.
Follow the GDPR model?
As for whether a federal data privacy law should follow the European model laid out in GDPR — Cook called the regulation “very successful” in his speech — some of our experts said it was too early to pronounce GDPR a model of data privacy protection success.
“It is difficult to know whether or not the specific law is a success,” Ortega said, adding that it certainly “has changed the way we talk about data privacy and consumer protections.”
Rene Kolga, senior director of product management at security vendor Nyotron, said that although GDPR is in its infancy, it should be studied as a template for a potential federal law in the United States.
“It is clear that the U.S. does not take privacy nearly as seriously as does the EU. As the population starts realizing the scary powers that organizations now have based on your digital breadcrumbs, this will start to change,” Kolga said.
During the “Lunch with Robots” panel at HubWeek 2018, host Jim Tracy asked a question that reflected a major theme of the weeklong event: How is the workforce going to change in the next 20 years, and how can we help people transition?
Panelist Colin Angle, founder and CEO of iRobot, said it will be important for companies to develop and implement useful technology while still prioritizing human workers and their needs in order to recruit and retain talent.
For the audience of mostly grades K-12 students who were invited to attend the lunchtime session at HubWeek 2018, Angle had encouraging news: For every Massachusetts college graduate who gets a degree in IT and computer science, there are more than 15 potential jobs waiting for them.
Good-paying, stable jobs at that.
“iRobot [has] definitely seen some significant transformation. The job hopping mentality is giving way to desire for security, to understanding the loyalty, values, mission of [a company],” Angle said.
Kathleen Kennedy, Director of Special Projects at Massachusetts Institute of Technology, took a different approach to answering Tracy’s question about the rapidly changing, tech-driven future workforce.
“Who is afraid of robots taking over?” Kennedy asked. A sea of tiny hands went up in the audience.
“Who works with robots? Who has ever seen or used one?” she followed.
Two adult hands went up — the crowd laughed at the discrepancy.
“We aren’t 20 years away from total AI disruption,” Angle said.
Though there isn’t an imminent takeover on the horizon, the next generation should prepare to be working alongside AI technology in the future.
Angle compared technology such as Rosie the Robot from the 1980s comedy The Jetsons to the Roomba. The technology is similar — a bot programmed to clean and vacuum autonomously — but of course the Roomba doesn’t have any of the dazzle and sass that came with the cartoon’s talking, humanoid maid.
Angle emphasized that we must consider the intended scope of technology like the Roomba when judging its success and failure, and when we think about how tech will change the workplace.
In 1980, robots completing concrete tasks were considered to be the future. But even after the completion and the success of the Roomba, the general public — conditioned by Hollywood versions of task-completing robots — is underwhelmed by today’s robot tech in the home, and likely will continue to feel that way.
Angle noted that although minor algorithms can be programmed in a shiny bot, a Rosie the robot maid remains many years out of reach.
Hollywood helped contribute to the promises about a world run by “generalized intelligence,” Kennedy added.
“We don’t have that technology,” Kennedy said. “Watson — perhaps the most famous robot — is just specialized intelligence. If you asked Watson to do something a four-year-old could do that it wasn’t programmed to do, it couldn’t.”
Timeslots of the future
For all the recent fear and trepidation of robotics taking over workplaces, the panic is not anything new, Kennedy said. She said experts and analysts have been predicting a 20-year timeline to “complete AI” for more than 70 years. But this time, things may indeed be different.
“You feel it in Boston, maybe a 20 year [timeline] is going to be right this time,” she said.
To compete in the future, Kennedy said companies considering AI implementation must create a culture of “collective intelligence.” Hierarchical decision-making has been the norm for decades, but technology has fostered community work — what Kennedy’s lab calls the “supermind” of shared values and norms.
“The idea of communities is rising,” Kennedy said to the young people in the audience.
As the young students left the panel and wandered back into HubWeek 2018, they took with them a non-Hollywood view of some very powerful technology.
Digital transformation is becoming a reality for many organizations: IDC forecasts worldwide spending on technologies and services that drive enterprise digital transformation to soar past $1 trillion in 2018. As a result, CIOs must be more mindful of overall business needs when implementing digital technologies or enhancing IT systems.
In a recent conversation with SearchCIO, Stephen Hamrick, vice president of product management at SAP Jam, explained how enterprise digital transformation is advancing the CIO role and why they should focus on aligning their technology strategy with business goals. Hamrick also shed light on the importance of adopting agile business practices to help drive enterprise digital transformation projects forward.
Editor’s note: The following transcript has been edited for clarity and length.
How is the CIO role evolving in regards to enterprise digital transformation?
Stephen Hamrick: As companies continue to digitally transform, one central aspect of that enterprise digital transformation is really about business agility, and that means that employees are also expected to move more quickly than ever before. In a lot of cases, they have greater access to even more data than they ever had at any time. In that respect, the role of the CIO then is really about understanding where the business wants to go. Digital transformation isn’t just simply about swapping out one system for another; it’s about how to align the technology strategy with the business strategy really well.
More than ever before, CIOs really need to be focused on their processes around aligning with business needs and identifying opportunities where IT systems enhancements can make a significant contribution to a company’s digital transformation needs.
That does mean that CIOs really have to understand how their company operates, what the key indicators for success are and how they can use technology. [When it comes to IT procurement], it’s easy enough to find people in finance and procurement who can do contract negotiation for CIOs, and it’s probably better not to have the CIO directly negotiating those contracts. But if CIOs don’t play their role as the person who helps translate business needs into technology strategy, then they’re really not leading the function of the CIO properly.
In fact, it can be very costly and damaging to the business if CIOs make those wrong choices. They can end up painting themselves into a corner and impeding the business’s process of digitally transforming. That’s because they are focused on getting the best procurement, best solutions, but not really aligning that solution to the business needs of the company.
Read the IDC report on enterprise digital transformation spending here.
How do CIOs drive service level agreements (SLAs) in a multi-cloud environment? That’s an important question when at issue is an end-to-end IT service involving many cloud providers, and the goal is to provide users with a seamless — and consistent — customer experience.
Andy Sealock, managing director of the sourcing advisory Pace Harmon, said the answer will be familiar to any CIO who has used a multi-vendor outsourcing strategy: Namely, keep your IT service management (ITSM) program in-house and use that governance framework to drive standardized SLAs.
Back in BC (before cloud) times, tapping multiple “best-of-breed” vendors to provide IT services came to be regarded as better than doing a mega deal with a single vendor. Better for the IT department and better for the business. “You wanted to be able to send your service desk to the best service provider, your data center to the best data center provider, your network to the best network provider,” Sealock said. The one caution for CIOs in these multi-vendor IT service deals: Do not outsource accountability.
The case for ITSM in a multi-cloud environment
Indeed, a multi-vendor outsourcing strategy only worked well for CIOs whose IT organizations had standardized ITSM processes, Sealock said. Successful organizations hardened SLAs by insisting that each of the different vendors plug into and comply with their internal ITSM processes.
A multi-cloud environment replaces that outsourcing strategy.
“Now you have a SaaS solution over here, you’re using IaaS from AWS and Microsoft Azure to do compute — and a host of other point solutions out there: Commvault for back up in the cloud and somebody else for unified communications as a service, and somebody else for disaster recovery as a service,” Sealock said.
“You need to use your ITSM process to stitch these different point solutions together, so you can provide an end-to-end-service to your users,” he said.
By relying on their IT organization’s ITSM processes, CIOs can give their users SLAs for a service, even though each of the different cloud providers has different metrics, e.g. an availability level of five-9s, or four-9s, or three-9s.
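The arithmetic behind that end-to-end SLA is worth making explicit. When an end-to-end service depends on several providers in series, their availabilities multiply, so the composite figure is worse than any single provider’s. The sketch below uses illustrative figures, not any real provider’s contract terms.

```python
from math import prod

# Illustrative component SLAs for one end-to-end service (hypothetical).
slas = {
    "IaaS compute": 0.99999,  # five-nines
    "SaaS app":     0.9999,   # four-nines
    "Backup/DR":    0.999,    # three-nines
}

# For services in series, availabilities multiply: every component must be
# up for the end-to-end service to be up.
end_to_end = prod(slas.values())

# Translate the composite availability into expected downtime per year.
downtime_min_per_year = (1 - end_to_end) * 365 * 24 * 60

print(f"end-to-end availability: {end_to_end:.5f}")
print(f"expected downtime: {downtime_min_per_year:.0f} minutes/year")
```

Even with a five-nines component in the mix, the composite lands below three-nines, which is why Sealock frames the end-to-end SLA as a probability and risk question rather than a simple pass-through of provider numbers.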
Multi-cloud environment SLAs: It’s not about finding the mean
How exactly is that cloud SLA derived when you have a multi-cloud environment? Well, that’s tricky. Or rather, it’s a risk management challenge.
“You don’t necessarily regress to the mean or the lowest level. You have to look at the probability of what the service level response is going to be from each provider,” Sealock said. And that SLA might be different from the SLA the cloud provider agreed to in a legal contract. And, he added, it may be different from what your internal architects decide when they “pop the lid off” and see how the service is provisioned.
“To some degree you have to decide how much risk you want to take on, so it’s not a straightforward answer,” Sealock said.
For more advice on how to use ITSM to enable the cloud, read my two-part interview with Andy, “Using ITSM best practices to optimize cloud usage,” and “Use an ITSM portal as gateway to cloud services.”
Also, stay tuned for more advice on making ITSM SLAs work in a multi-cloud environment.