Recently, as I was reporting on cloud computing’s effect on the CIO, I kept running into the same idea: The job of CIO is not the technology job it once was.
In fact, Forrester Research’s Bobby Cameron said the last time he and his colleagues looked into how many CIOs had a tech background, it was 50%. That was years ago; today, the percentage has probably gone down.
“We quit asking the question because no one paid us to ask. It’s just — no one cared,” Cameron said. Today, the job of CIO is not primarily a technology job, he stressed. Certainly, though, CIOs “need to understand the power of technology.”
Cloud computing has shown them that power, taking the nitty-gritty IT of operations out of the data center and giving it to the cloud providers — what Cameron calls a disassociation from the “low-level stuff.” That has freed up CIOs and IT to help the business create value for customers — and quickened the long-discussed transition of the CIO from tech arbiter to business partner.
Cloud has done something else to influence the shifting job of CIO, said Shashank Dixit: It has opened the gates on “the data deluge.”
Dixit is the CEO of Deskera, a business management software provider headquartered in Singapore. For years, Dixit said, CIOs have focused on information, which he characterized as “highly distilled data” residing in their companies’ business and IT systems. Now, with waves of data flowing in from multifarious sources, CIOs need to refocus their energy.
“I’ve seen CIOs move from silos and walled gardens to the complete openness of today, where you can use a phone and you’re plugged in using various devices, and then you have applications you can use from any device,” Dixit said.
And customers are using those devices to contact the companies they do business with — often through social media platforms such as Facebook or WhatsApp. That constant stream of data, easily created in and moved through the cloud, has made CIOs “data officers more than information officers,” Dixit said. (The debate over whether they are, especially in the presence of an exec with the title chief data officer, has been on for years.)
They now must chaperone that data, building a comprehensive policy to secure, manage, process and analyze it, Dixit said. If they don’t, and they infuse their business processes with popular new technologies such as AI, which depend on reams of data to work well, “it’s like lighting a powder keg,” he said. “You’re going to have a lot of data coming your way, and you’d better have a plan to deal with that.”
Getting technology in place is still “very high up” the task lists of CIOs, Dixit said, but the cloud has made procuring new applications or business processes so easy for business users, the job of CIO is no longer about serving as tech gatekeeper.
“They’re no longer in control. The user within the enterprise can decide to use an application that they would want to use,” Dixit said. “So the CIOs have to now move away from taking orders on what to buy and where to buy it from to taking care of the data.”
Because of data and compute power requirements, training a deep learning algorithm doesn’t typically happen on so-called edge devices such as smartphones or drones. But Neurala Inc., a deep learning startup based in Boston, is looking to change that with its lifelong deep neural networks, or LDNNs.
The fledgling firm claims its deep neural networks get around the traditional data and compute constraints by adding to the data set at the edge, thus enabling mobile devices like drones to learn literally on the fly. Its initial project was funded by NASA and focused on space exploration, but when I mentioned Neurala to Forrester Research analyst Mike Gualtieri, he put his finger on a more terrestrial use case for its brand of deep neural networks.
“Imagine a self-driving car,” he said. “And imagine that one of its models is used to recognize speed limit signs.” And in fact, you don’t have to imagine too hard because a German auto manufacturer did just that, Gualtieri said.
It trained a model to interpret speed limit signs by feeding it lots of examples of speed limit signs, and it tested its self-driving car on the Autobahn — where it encountered a curious problem: The self-driving car was slowing down automatically for reasons that weren’t obvious.
“They finally figured it out,” Gualtieri said. “It was because sometimes the cameras on the self-driving car saw what it thought was a speed limit sign but was actually a decal on the back of a truck that said, ‘I don’t go more than 60 kilometers per hour.'”
The training of the model wasn’t accurate enough to differentiate between a decal and a speed limit sign. In cases like this, the testing company brings the model back to the lab to be retrained to distinguish between speed limit signs and decals.
“So the question I would have for this company is, alright, given that scenario, how would you train that model?” Gualtieri said. “You’d have to have some sort of user feedback or some other feedback for it to learn.”
So I asked Neurala. “It actually does require a human to be in the loop,” said Heather Ames, COO and co-founder of the Boston-based startup.
Neurala’s lifelong deep neural networks don’t self-correct autonomously. In other words, the learning that takes place on the edge is supervised learning. A human helps decide what the lifelong deep neural networks should learn and helps to correct a system that isn’t performing accurately.
“The human operator would have some sort of user interface to modify learning,” Ames said. “So to correct it, either through reinforcement: ‘Yes, we want to slow down when we see this sign.’ Or: ‘No, we don’t want to slow down when we see this sign. When we see the decal sign, we want to just follow at a safe distance rather than slow down.'”
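The correction loop Ames describes, an operator confirming or overriding the system’s label so the verdict becomes a new training example, can be sketched roughly as follows. This is an illustrative sketch only, not Neurala’s API; the function names, labels and feature values are all assumptions.

```python
# Illustrative sketch (not Neurala's API) of supervised, human-in-the-loop
# correction at the edge: the operator's yes/no feedback turns the model's
# prediction into a new labeled example the on-device model can learn from.

labeled_examples = []  # grows on the device as the operator gives feedback


def operator_feedback(image_features, predicted_label, operator_says_correct,
                      corrected_label=None):
    """Record the operator's verdict as a new labeled training example."""
    if operator_says_correct:
        # Reinforcement: "Yes, we want to slow down when we see this sign."
        labeled_examples.append((image_features, predicted_label))
    else:
        # Correction: "That was a truck decal, not a speed limit sign."
        labeled_examples.append((image_features, corrected_label))
    return labeled_examples[-1]


# The car slowed for a decal; the operator corrects the label.
example = operator_feedback([0.2, 0.7], "speed_limit_60", False, "truck_decal")
print(example[1])  # truck_decal
```

In a real system, the appended examples would feed an incremental update of the on-device network rather than just a list.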
In the case of a driverless car with a driver behind the wheel, she said Neurala would never advise a driver to go hands-free to train the software and create unsafe driving conditions. Indeed, an application to solve this problem would have to involve a lot of user experience design work to figure out the best way for an operator in a car to supervise a deep neural network safely.
But the point is: “We’re a long way from full autonomy, particularly with the systems we build,” she said.
Tony Abel, a managing director at consulting firm Protiviti, met recently with SearchCIO to share some of his experiences in delivering robotic process automation (RPA) services to the firm’s primarily Fortune 100 clients.
Robotic process automation technology, sometimes referred to as software robots, automates high-volume, repeatable tasks done by humans, such as queries, calculations and maintenance of records and transactions. The software is designed to mimic how employees interact with their computer interfaces to complete these rules-based tasks, from logging in to the relevant applications to capturing and entering data, performing calculations and logging out.
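The rules-based, repeatable character of these tasks is what makes them automatable. A minimal sketch, in no way a real RPA platform (products such as UiPath or Blue Prism drive actual application UIs), might encode one such clerical routine; the record fields and rules here are hypothetical.

```python
# Minimal sketch of a rules-based, RPA-style task: validate a record,
# perform the routine calculation, and route the result, the same fixed
# steps a human clerk would follow. All fields and rules are illustrative.

def process_record(record: dict) -> dict:
    # Rule 1: required fields must be present, or the record is an exception.
    if not record.get("account") or record.get("amount") is None:
        return {"status": "exception", "reason": "missing field"}

    # Rule 2: perform the routine calculation (here, add a 5% fee).
    total = round(record["amount"] * 1.05, 2)

    # Rule 3: route high-value records for human review, post the rest.
    status = "review" if total > 10_000 else "posted"
    return {"status": status, "account": record["account"], "total": total}


print(process_record({"account": "A-100", "amount": 200.0}))
# {'status': 'posted', 'account': 'A-100', 'total': 210.0}
```

A real software robot would read and write these values through application screens or APIs rather than Python dictionaries, but the rules-based core is the same.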
RPA technology, while not new, is emerging as a major driver of business process efficiency, with cost savings of up to 300%. The eye-popping ROI, say experts, is due in part to new software platforms with AI and machine learning capabilities that allow these bot-enabled gains to scale from one-off projects to enterprise-wide transformation. Protiviti partners with three of the major technology platform providers in the RPA market space: Blue Prism, Automation Anywhere and UiPath.
Abel comes to the RPA services field armed with many years of experience in supply chain process improvement, where management approaches like Lean and kaizen drive optimization. Any RPA engagement, Abel said, begins with first identifying processes ripe for improvement. “We really try to lead with the business problem. We don’t go in with a hammer looking for a nail.”
Here are edited excerpts from my briefing on Protiviti’s approach to RPA services.
Questions for Protiviti’s Tony Abel on RPA services
Do you improve the business process first or automate the existing process as is, figuring any bot will be more efficient than humans?
Tony Abel: It is an interesting question. RPA is certainly not a capability or approach you slap on a broken process. The way I describe it with clients is that there is some level of standardization required. You may not take it from its current state to a fully efficient process — the goal of automation — but you do have to standardize that process to the degree that once a bot is developed and implemented against that process, it’s running consistently. You can have some variations — and bots are intelligent enough to know, “When it looks like this, do it this way, when it looks like that, do it another way.” But you don’t want too many variations in the process, primarily because of the data that sits behind the process. Bots do really well with structured data; with unstructured data they tend to kind of fall on their face — there’s lots of exceptions dropping out; it requires so much manual effort just to administer the bot that it becomes not that effective in driving efficiency, which is what you’re after.
So RPA is somewhere in between what you do with more traditional ways of improving a process and just slapping a bot on top of a broken process and hoping you get the efficiency.
What’s a common example of process standardization that companies do to get bot-ready?
Abel: So, in a lot of cases, it’s dealing with things like one business unit processing invoices differently than another business unit. Certainly, you can build individual automation within each of those processes, but the real value is if you can standardize them. Let’s assume for a minute they are not all that different. If you can get them to operate their invoicing processes more similarly, then you implement a bot that runs at a much higher level across those two divisions, as opposed to point solutions within each. That’s one level of standardization I talk a lot about. And then, frankly, the quality of the data that underlies the process is probably the most important driver of getting to an effective bot.
I’m sure it varies with every organization, but can you give a sense of how long it takes to standardize a process to the point where it operates consistently?
Abel: It does depend on the company. We’ve done proofs of concept for RPA services in a matter of weeks for straightforward use cases. An example is user provisioning in an IT department. There’s usually not a ton of standardization that happens in that process, and the rules are pretty easy to identify — most of these IT organizations have this documented. That allows for a pretty quick few weeks to get a proof of concept out, watch the bot run in a test environment and start thinking about how you can get it into production.
When it’s a much more comprehensive process — I used the invoice processing example — anything in that realm does tend to get much more involved. Let’s say we’re looking at a three-way match business process of a purchase order (PO), a receipt and an invoice. There tends to be a lot of exceptions in that process. Here’s one: A company has received five of the 10 widgets it ordered on the PO, but the requester has now decided that all it wants is five. So how do we, as a business — in an automated fashion — add awareness that this is the case, versus, say, an automated process for a company that is still buying all 10 widgets?
So it takes a little bit more process definition work up front when it’s a comprehensive process like that, and that can take some time, because you really have to build those rules to be pretty precise. And where they can’t be precise, then you need to know that it will require some subjective decisioning — and that’s where the bot has to stop. The bot can take you through all the finite rules-based logic very quickly, very efficiently, but when it gets to the “Now we need a human to weigh in and make a subjective decision” point, the bot has to stop, the human decision has to be made and the bot can then be fired up for the rest of the process.
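The pattern Abel describes, finite rules the bot can run on its own plus a hard stop where the decision turns subjective, could be sketched for the three-way match example like this. This is a hedged illustration, not Protiviti’s or any vendor’s implementation, and the quantities and outcome labels are assumptions.

```python
# Sketch of Abel's three-way match logic: apply finite, rules-based checks
# across PO, receipt and invoice quantities, and escalate to a human
# wherever the call becomes subjective. Labels and rules are illustrative.

def three_way_match(po_qty: int, received_qty: int, invoiced_qty: int):
    # Finite rules the bot can execute on its own.
    if po_qty == received_qty == invoiced_qty:
        return ("approve", "quantities match")
    if invoiced_qty > received_qty:
        return ("reject", "invoiced more than received")
    # Partial receipt (e.g., 5 of 10 widgets): does the requester still
    # want the rest? That is a subjective call, so the bot stops here.
    return ("escalate", "partial receipt: needs human decision")


print(three_way_match(10, 10, 10))  # ('approve', 'quantities match')
print(three_way_match(10, 5, 5))    # the bot stops and escalates
```

Once the human decision is recorded (say, the PO is amended down to five widgets), the bot can be fired up again for the rest of the process.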
What’s the ROI on a successful RPA implementation, ballpark?
Abel: We’re probably seeing that a bot in an automated process is 70% faster, more efficient and more cost-efficient than a human performing the same activity.
Serverless computing may be the newest, shiniest gadget in the developer’s toolkit: It’s a cloud service; it runs only when triggered by an “event,” like a click on a website, so it doesn’t rack up costs; and best of all — it hides all of the underlying server management.
But serverless computing architecture won’t replace containers or other, more established methods of building applications.
“Most IT shops don’t have the luxury of every time a new technology comes along, ‘Let’s rewrite everything to this new way of doing things,'” said Rich Sharples, senior director of product management at open source software company Red Hat. “What you end up with in typical enterprises is different generations of technology, and they are going to live together in harmony.”
The ops side of DevOps is another matter. Serverless environments will essentially be “ops-less,” said Sharples, a former developer, because the operations are taken care of by the cloud provider.
It’s a good thing to have developers who understand operations, he said, but developers need to churn out applications fast. “And if that means you’re creating a level of abstraction to hide and completely automate a lot of this, then yeah, I think everybody wins there.”
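The event-triggered model described above can be sketched as a minimal handler function. This is a generic illustration under stated assumptions, not any particular provider’s API, though platforms such as AWS Lambda follow a similar handler shape; the event fields are hypothetical.

```python
# Minimal sketch of the serverless model: a function that runs only when an
# event arrives (say, a click on a website) and returns a response. Between
# invocations nothing is running and nothing accrues cost; the cloud
# provider owns and manages the underlying servers.

def handler(event: dict, context=None):
    """Invoked once per event by the platform; 'context' stands in for
    the runtime metadata a real provider would pass."""
    action = event.get("action", "unknown")
    return {"statusCode": 200, "body": f"handled {action} event"}


print(handler({"action": "click"})["body"])  # handled click event
```

The developer ships only this function; scaling, patching and server management are the provider’s problem, which is the “ops-less” quality Sharples describes below.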
Sharples spoke to SearchCIO about serverless computing architecture in an interview, which was published as a Q&A this week. Here is more from that conversation, edited for brevity and clarity.
Where does serverless computing architecture fit into the application development landscape?
Rich Sharples: This idea that serverless is going to kill containers — I think that’s really nonsense. There are well-accepted use cases and usage patterns for both of these, and I already know they are going live in harmony. Most IT shops don’t have the luxury of every time a new technology comes along, ‘Let’s rewrite everything to this new way of doing things.’ What you end up with in typical enterprises is different generations of technology, and they are going to live together in harmony.
So we’re going to have containerized monolithic applications — sometimes decomposed into microservices, using traditional containerized microservices; sometimes using serverless. And all these things are going to work happily together because we spin them together using well-defined APIs. This whole idea of APIs as the way to interact with software gives us that insulation and control.
So when I’m thinking about building net-new applications, moving to a microservices model, I’m likely going to have a mix. Some of that application logic is going to be a really good fit for implementing serverless. For some of it I want a more traditional, long-running microservice running in a container. And from the outside, from the consumer perspective — the consumer of the service — they don’t need to know whether it’s implemented as serverless or a regular, long-running microservice.
It’s really just another choice for developers — modernizing existing applications or building net-new applications. Nothing ever dies in IT — they will just keep accumulating new generations of technology.
How does DevOps fit into serverless computing architecture?
Sharples: From a developer perspective, it almost becomes ops-less. I mean, the whole serverless term is kind of stupid, right? It is a server. There is infrastructure. There is somebody taking care of that stuff. So there’s still an operational focus there. It’s relatively hard to find really good developers who also can handle operations at scale.
For some things, it absolutely makes sense, and I think organizationally, from a lack-of-ownership perspective, developers understanding the role of ops, understanding some of the concerns around security and operational effectiveness, is good: It helps to create better applications. But at the same time, developers need to get new products and get new features out very, very quickly. And if that means you’re creating a level of abstraction to hide and completely automate a lot of this, then yeah, I think everybody wins there.
As much as anything, there is a massive drive around automation. People have realized that to continue to move ahead and to be able to even just keep the lights on, they’ve got to invest in automation and things like Ansible and OpenShift. So I think the term is ops-less, or less ops: more automation and fewer people. We’ve been on that drive for 10 years. And I think serverless is just a good example of how far we’ve come.
The Gartner Data & Analytics Summit I’ve been writing about recently was filled with prescriptive advice for data analytics leaders. Much of that advice, unsurprisingly, was focused on the red-hot topic of enterprise artificial intelligence and how AI technologies — from natural language generation to deep neural networks — are poised to radically disrupt enterprise analytics programs.
Just thinking about how to cover this new reality for our CIO audience was enough to make this reporter sweat. I can’t imagine the pressure enterprise leaders in charge of this stuff must be feeling. Fortunately for those who attended the event, a session by Gartner distinguished analyst Whit Andrews sought to allay the concerns of digital leaders still in the early stages of forming their artificial intelligence strategies.
Andrews, who sets the agenda for Gartner’s AI practice, began by giving a simple definition of AI that differs from Gartner’s. He calls it his “starting point” definition:
“AI projects grant organizations superpowers to classify and predict in ways workers can’t on their own.”
That’s the definition digital leaders should keep uppermost in mind when jumpstarting an artificial intelligence initiative in their own organizations. It will remind them that AI should not be used to reinvent the wheel but to invent.
Andrews also wanted the audience to know a “real truth” about artificial intelligence strategies that tends to get drowned out by the relentless buzz: Their organizations are probably not behind on AI, despite what events like this and the endless coverage in the press might suggest. He pointed to a Gartner CIO survey showing that only one in 25 CIOs is employing AI today in their organizations. Only five in 25 CIOs said they were in a short-term planning stage or actively experimenting with AI, according to the same survey. Only six in 25 are even in the medium- or long-term planning stage with AI.
Of course that doesn’t mean CIOs and digital leaders shouldn’t get rolling on developing and implementing their artificial intelligence strategies. Andrews said the first thing they should do is to go after “historical desires,” that is, use AI to address something they’re already trying to do. The most successful AI use cases come from organizations who first address reasonable and possible goals — not moonshots, he added.
In general, Andrews said organizations starting out with AI should aim for fairly “soft” outcomes, such as improvements to processes, customer satisfaction, products and financial benchmarking.
“True ROI is hard to calculate, which can create a barrier to AI experimentation,” he said.
Andrews’ other advice for digital leaders fleshing out their artificial intelligence strategies is as follows:
- Plan for the transfer of [AI] knowledge from external service providers and vendors to enterprise IT and business workers.
- Choose AI solutions that offer means of tracking and revealing AI decisions by using action audit trails and features that visualize or explain results.
- Deploy AI to solve challenges in which you lack the resources or corporate worker base to succeed.
- Document applications that can improve through training.
- Fool around with some AI technologies and solutions for pure learning purposes, not ROI.
- Develop a training, hiring and sourcing plan to build AI capability over the next three years.
Imagine asking Cortana about your revenue last quarter.
That could be the future, according to Gartner analyst Svetlana Sicular. At the Gartner Data & Analytics Summit, Sicular discussed some of the technical challenges that still need solving before voice AI assistants can approximate human-to-human conversation. (For example, AI voice assistants are still pretty bad at understanding context, such as how time of day might change the meaning of what is being said.) She also offered advice to digital leaders on how to incorporate conversational AI technologies into their artificial intelligence plans. And she touched upon something else: the coming onslaught of business applications for AI voice assistants, courtesy of every AI and AI-adjacent vendor out there.
“Every single participant [in the voice AI market] is preparing for the business,” Sicular said. “Every single one is creating business capabilities.”
The capabilities of AI voice assistants are still nascent, she said, and definitely on the simpler side — booking a conference room, scheduling a meeting for multiple people or reading emails. But Sicular foresees a time in the not-so-distant future when voice AI assistants in the workplace — or “employee assistants,” as she referred to them — are not just booking meetings but are an active participant in those meetings.
In those meetings, voice AI assistants could relay business data and analytics to employees. It’s all about “making your analytics talk,” Sicular emphasized. These employee assistants couldn’t voice a report of thousands of lines at once, she noted, but they could convey top insights and changes in the data.
One way in which some financial institutions and other organizations are experimenting with voice AI assistants is for compliance, Sicular said. Employees ask their assistants, “Can I do something or not? Is this against corporate policy?” That’s the sort of narrow use case for voice AI technology that IT leaders should be thinking about, she said.
AI voice assistant security conundrums
Unfortunately, since AI voice technology was designed initially for personal use, not for corporations, digital workplace assistants open up a flood of security questions, Sicular said. What if the text is read aloud and it’s sensitive information? How do you secure that information? Since there will be multiple users at meetings with different levels of security clearance, how does the device authenticate those users by their voices?
Questions about where data associated with voice AI technology should be stored and who owns it add another wrinkle to the security strategy for AI voice assistants. IT executives will have to deal with questions of encryption and access, Sicular said, noting that voice AI providers like Google do provide some security and encryption for company data in their enterprise editions.
AI voice assistants in the workplace will also require a culture change, Sicular said, which is why she said organizations need to provide an environment for employees to adapt to a voice-saturated workplace and to learn continuously. This will be especially important when the AI voice assistants take on daily tasks and routines.
Sicular said there are many choices for organizations looking to bring voice AI into their workplaces, but noted that “Cortana owns the corporate desktop and Windows.” Microsoft has been slowly adding more email intelligence to Cortana, and the AI assistant will soon be on Office 365 apps, reading aloud emails and performing other tasks.
Microsoft Teams, the company’s workplace collaboration tool and Slack competitor, will also be integrated with Cortana, allowing employees to easily make a call, join a meeting or add people to meetings using natural, spoken language. Employees using Teams will also be able to record meetings, create an automatic transcription of what was said during the meetings and save the meetings to the cloud.
For CIOs, this part of Sicular’s message at least is crystal clear: Voice AI is the future, and it’s coming to your workplaces sooner than you think.
It took a nation-state attack for Alan Levine to realize the importance of implementing a cyber awareness program.
“I believed that cyber awareness training was useless because I believed my users were probably untrainable,” Levine, cybersecurity advisor at Wombat Security and a former CISO at aluminum giant Alcoa and its spinoff Arconic, said. “What I learned was that it was a pivotal and critical part of my cyber defense strategy.”
At the recent InfoSec World conference, he shared the story of how a cyberattack on Alcoa converted him into a strong proponent of security awareness training.
In 2008, while he was the CISO at Alcoa, Chinese hackers created an email account claiming to be that of then Nissan CEO and Alcoa board member Carlos Ghosn and sent an email to 19 senior Alcoa employees. The message included malware in an attachment disguised as an agenda for the company’s board meeting. Users were tricked into downloading the malware, allowing the alleged hackers to gain access to Alcoa’s network. They stole nearly 3,000 emails containing sensitive information that included internal discussions about a partnership with a Chinese state-owned enterprise.
“I can tell you that we had just about everything in place that we could to prevent against a cyberattack, with the exception of a formal cyber awareness program,” he said.
Levine realized it was time to deploy a cyber awareness program that was formal, structured and measurable. It would also give his organization an additional level of confidence that all those users that he believed were “untrainable” might have a chance to do the right thing the next time they received phishing emails, he said.
“We looked at acquiring a program. I wanted to be able to test the condition, and retest certain users whom I call my ‘problem children’ who would fail test-phish after test-phish. I also wanted to know who was refusing to take training,” he said.
Before deploying the cyber awareness program, they had test-phished 650 employees in the CFO’s department and 66% fell for the phishing scam, he said. After training and retraining the same group of people, that number came down to 16%, he added.
The program helped users become aware of phishing attacks, he said. Over time, more users learned how to identify a phish and know what to do when they saw one, he added.
“It was an incredible turning event for me, as a CISO. What I learned was that all of my budget was going towards my first line of defense … I wasn’t spending a penny on my users, my last line of defense,” he said. “If a user receives an email that has a link or an attachment, that user has a binary choice: to click or not click. If they made the wrong choice, that next attack would be successful and unauthorized exfiltration would be possible.”
CIOs, if your company has a chief data officer, read no further. But if the “I” in CIO includes data, then your job will be expanding again. Yippee?
Here’s the crux: So-called knowledge workers, or people who create new knowledge by looking at spreadsheets and digging for data — according to one definition I heard recently — are becoming ever more essential to the modern digital enterprise.
The problem is that this level of data prowess is hard to come by in the enterprise. Gartner has identified the lack of data literacy as one of the four major challenges preventing companies from capitalizing on data. So, on-the-job data literacy training is needed to ensure that employees have the requisite skills to do their data-intensive jobs. How does this happen?
At the recent Gartner Data & Analytics Summit, I heard an inspiring story from Jenifer Cartland, who has seemingly figured out how to do data literacy training. Cartland is the administrator for data analytics and reporting at Ann and Robert H. Lurie Children’s Hospital of Chicago and the director of the hospital’s Child Health Data Lab. She is also a research associate professor of pediatrics at Northwestern University’s Feinberg School of Medicine. And she is one very determined data literacy educator.
A new data regime
Cartland told the audience of data experts that she started thinking about the need for data literacy training at the hospital a few years ago after her team had launched a data analytics initiative underpinned by the concept of self-service. No matter how much data she and her team put at users’ fingertips, however, or what kinds of tools they invested in, efforts fell flat. People either were not equipped to handle the data, or they didn’t know how to operate the system or the training was too complex or too simple.
“We had to step back,” she said to the packed room and laid out the step-by-step actions her team has taken over a two-year period to start righting these data literacy wrongs. Her prescription, which I wrote about in detail here, included surveying users about their information needs and technical data skills; establishing an analytics center of excellence; starting a community analytics blog; offering office hours to “high-need” users; creating a short course in analytics; and developing a seminar series.
Here are two seminal decisions made by Cartland: First, to propose — and be granted the right — to oversee the job descriptions for analysts sent out by hospital departments outside of her analytics organization; and second, to have many of the existing departmental analysts co-report to her organization, the newly formed analytics center of excellence. Setting standards for new analytics hires and establishing a place where “light quants” and “homegrown” analysts from across the hospital system could find a home, be supported, trained and sent out to help promulgate a common language for data, she believes, will prove critical to lifting the quality of analytics enterprise-wide.
Who owns the data?
At the end of the talk, it was Cartland’s co-reporting structure — this notion of her team owning analytics across the enterprise — that piqued the interest of a number of the data executives in the audience. At many companies, a questioner noted, there are struggles about who owns what data and who controls its use. “There’s a lot of politics involved,” he said. How did Cartland pull it off?
One important factor in her ability to take charge of analytics, it seemed, was that the CIO was out of the picture. Cartland said that when her department was formed five years ago, the decision was made to put it under the chief medical officer rather than with the then-CIO (no longer with the hospital), who didn’t want it under his aegis. “Reporting and analytics were not done at all by Information Management,” she said, “so there was a gap for us, relatively speaking, to claim that space.”
That missing-in-analytics IT department recalled a remark that bothered me during the opening keynote of the Gartner conference. The Gartner analysts were extolling the vital importance of the role of an analytics leader in the enterprise. In an economy awash in data and driven by data, “speaking data” must become a second language for knowledge workers, or companies will fail. Analytics leaders must take up the gauntlet of data literacy training, because if not them to the rescue, then who?
One of the analysts referenced the annual Gartner CIO survey, noting that for 12 of the last 13 years, CIOs had identified BI and analytics as the CIO’s top priority. “As much as they talk about it, in practice, data and analytics is not a CIO’s top priority,” he said. “Security and infrastructure will always be the biggest fires that the CIO needs to put out.”
That gave this longtime CIO reporter pause. While putting out infrastructure and security fires is certainly a big and important job, data is what’s making businesses go round. All hat and no cattle when it comes to data doesn’t seem like the right fit for CIOs, in particular at companies that do not have a CDO. Better to hire some firefighters and get your head in the data game.
For 89% of companies, one cloud is not good enough.
That’s according to a 2017 Forrester report on cloud adoption. Forty-eight percent use five cloud providers or more, and 41% have two to four providers, leaving 11% with just one. Fifty-nine percent say their cloud strategy is hybrid, a term typically used to describe a combination of public and private cloud use.
Companies are relying on multiple cloud services and systems to lower storage costs by switching from on-premises systems to the cloud; to increase “resilience,” or availability in case of a disaster or outage; and to give users the specific services and tools they need.
Presiding over a jumble of cloud and on-premises systems and trying to manage them all as one computing environment is a cloud challenge CIOs are now coming to terms with, as I’ve shown in reporting over the past year. The Forrester report also illustrates a parallel reality:
“Increasingly, companies realize that the cost and agility benefits they enjoy from using multiple clouds can outweigh the additional management complexity that multiple clouds create.”
But managing the complex environments that organizations have today — multiple public cloud services, private cloud systems and on-premises servers — is a significant cloud challenge, said Bobby Cameron, a Forrester analyst. I spoke to him for a story published earlier this month that explored the effect that cloud computing is having on the role of the CIO in organizations today.
Seeking common APIs
Most companies are just monitoring their multi-cloud and hybrid environments — sometimes referred to as hybrid IT — “trying to understand when it’s not working and get some handle on it,” Cameron said. So they’re using a cloud management tool to check on things like how much computing power is being used, what the storage capacity is, whether the network connectivity to the cloud is sufficient and whether public cloud subscription costs are spinning out of control.
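To make that first monitoring stage concrete, here is a minimal sketch of the kinds of checks Cameron describes — compute utilization, storage capacity, connectivity and spend — run across several environments. The environment names, metric fields and thresholds are all hypothetical; they don't correspond to any real cloud management product.

```python
# Hypothetical sketch of basic hybrid-IT monitoring checks.
# All names, metrics and thresholds are illustrative, not from a real tool.
from dataclasses import dataclass

@dataclass
class EnvironmentMetrics:
    name: str                 # e.g. a public cloud account or on-prem cluster
    cpu_utilization: float    # fraction of provisioned compute in use
    storage_used_tb: float
    storage_capacity_tb: float
    link_healthy: bool        # is connectivity to this environment up?
    monthly_spend_usd: float

def check_environment(m: EnvironmentMetrics, budget_usd: float) -> list:
    """Return human-readable warnings for one environment."""
    warnings = []
    if m.cpu_utilization > 0.85:
        warnings.append(f"{m.name}: compute above 85% utilization")
    if m.storage_used_tb / m.storage_capacity_tb > 0.9:
        warnings.append(f"{m.name}: storage above 90% of capacity")
    if not m.link_healthy:
        warnings.append(f"{m.name}: connectivity to environment is down")
    if m.monthly_spend_usd > budget_usd:
        warnings.append(f"{m.name}: spend exceeds budget")
    return warnings

# Stubbed numbers stand in for what a real management tool would poll.
envs = [
    EnvironmentMetrics("public-cloud-a", 0.91, 40, 50, True, 120_000),
    EnvironmentMetrics("on-prem", 0.55, 10, 40, True, 30_000),
]
for env in envs:
    for warning in check_environment(env, budget_usd=100_000):
        print(warning)
```

Even a toy version like this shows why monitoring alone is only the first stage: it tells you something is wrong in one environment, but does nothing to simplify moving workloads between them.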
All that is good and necessary, but the “second stage” of hybrid IT management is simplifying it, Cameron said — making it easy to mix and match the pieces of heterogeneous environments and move applications written for the private cloud to the public one.
“Some of that has got to wait on the vendors — a common set of APIs to get after similar facilities in each of the different stacks isn’t there yet,” Cameron said.
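The missing common API Cameron describes is essentially an adapter problem: each vendor stack offers similar facilities behind a different interface. The sketch below shows what a provider-agnostic layer might look like; the class names, methods and ID formats are invented for illustration and do not reflect any vendor's SDK.

```python
# Hypothetical sketch of a common API over different cloud stacks.
# Nothing here corresponds to a real vendor SDK.
from abc import ABC, abstractmethod

class CloudStack(ABC):
    """One common set of operations, implemented per vendor stack."""
    @abstractmethod
    def provision_vm(self, cpus: int, memory_gb: int) -> str: ...
    @abstractmethod
    def deprovision_vm(self, vm_id: str) -> None: ...

class PublicCloudStack(CloudStack):
    def provision_vm(self, cpus, memory_gb):
        # A real adapter would call the public provider's SDK here.
        return f"pub-vm-{cpus}x{memory_gb}"
    def deprovision_vm(self, vm_id):
        pass

class PrivateCloudStack(CloudStack):
    def provision_vm(self, cpus, memory_gb):
        # A real adapter would call the private cloud's API here.
        return f"priv-vm-{cpus}x{memory_gb}"
    def deprovision_vm(self, vm_id):
        pass

def move_workload(src: CloudStack, dst: CloudStack, vm_id: str,
                  cpus: int, memory_gb: int) -> str:
    """Mix and match: recreate a workload on another stack, retire the old one."""
    new_id = dst.provision_vm(cpus, memory_gb)
    src.deprovision_vm(vm_id)
    return new_id
```

In practice, the hard part is everything this sketch hides — image formats, networking and identity differ per stack — which is why Cameron says some of the simplification has to wait on the vendors.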
Hybrid to the rescue?
Microsoft made progress in facing the cloud challenge last summer, with its rollout of Azure Stack. The service lets customers run Microsoft’s Azure cloud technology on their own servers, bridging the gap between the public and private cloud.
Some cloud vendors have formed alliances with others on hybrid cloud management — for example, virtualization software company VMware linked with Amazon Web Services, enabling operations on VMware to move to the AWS public cloud. And there’s a partnership between VMware and IBM Cloud. That might work for customers juggling VMware, AWS and IBM, “but you’re leaving out Google and Azure,” Cameron said, referring to Google’s cloud offering, Google Cloud Platform.
He likened today’s hybrid IT cloud challenge to the early days of the data center, when handling multiple vendors of equipment was “hard as hell.”
“The simplification is really the next step. That’s got to happen both from the tool standpoint as well as CIOs being able to take advantage of it,” Cameron said. “And then will come the optimization, which is really where we want to get to.”
The job of CIO is not what it used to be. The days of taking technology orders from the business and just handing over the goods are finished. Beginning is an age of collaboration — working with the business to deliver value to customers, whether internal or external. That is how it is at Voxbone, a Belgian communications-as-a-service company, said Dirk Hermans, vice president of research and development there.
Hermans, who shares the job of CIO with the company’s COO — they both have operational responsibilities — leads a team of about 30 people who develop and design new products and features. He spoke recently to SearchCIO about the changing job of CIO. Some key insights from that conversation: Cloud computing is helping enable the transition, owing to certain key characteristics, and a benevolent side effect of the new role is an IT-business partnership that’s overshadowing shadow IT. Following are edited excerpts.
What about cloud is helping shape the job of CIO into a business strategist role?
Dirk Hermans: Well, take software as a service: you have a pretty standard problem in the industry that’s been solved by a cloud provider. If solving that standard problem in-house would require a lot of maintenance, then suddenly putting it outside of the company has great value, because you have someone specialized in solving only that problem. And oftentimes when you talk about software as a service, it’s really about a very specific part of a business process that has been solved quite well by a cloud player.
When you talk about infrastructure as a service, there it’s different. To give you an idea, we’re using infrastructure as a service to experiment — to run really rapid prototypes, sometimes with customers, outside of our core network. And also to scale up quickly: if we need to scale up a specific server, we don’t need to put up a huge amount of capex upfront. Every use case is different, and the infrastructure-as-a-service use cases are definitely quite different from software as a service. Using this stuff always makes us wonder: How can our customers consume our services more easily as a cloud-based solution?
Is the new, evolving job of CIO more rewarding than the old one?
Hermans: Yeah, I think so, and the reason is that at the end of the day, shadow IT is basically the business saying, ‘Guys, you’ve had the monopoly for too long; we’re not happy. We’re going to shop elsewhere.’ That in itself is a threat for some people, but the moment you have a real partnering discussion with the business is the moment you’re making choices together. And you’re actually giving the business the tools to stand on the shoulders of giants. The moment you embrace that move into the cloud, encourage it and give the business the right advice is the moment you can generate a lot of internal satisfaction. I think it’s really key to have that discussion on a partnership level and not on a level of distrust.
Learn what IT chiefs have to say about cloud and the changing job of CIO in this SearchCIO report.