LAS VEGAS — According to Intel CEO Brian Krzanich, if you want to see the makings of the next data revolution, all you need to do is look up.
“Look up” at drones, that is, Krzanich told the audience during his keynote at the InterDrone conference in Las Vegas.
Drones possess the ability to capture precise data for industries like agriculture, construction and infrastructure inspection, even in the most demanding situations and environments. As such, unmanned aerial vehicles or UAVs (the industry’s preferred term for drones) are one of the most important technologies of the data age, Krzanich said.
Intel has a special interest in the future role of UAVs in business. As the chip maker shifts from a PC-centric to a data-centric company, it sees drones as becoming a critical component in the quest to extract real, actionable value from data.
“But the future of drones is more about what you can do with that data and what that data means, and the insights it has [rather] than the actual flight itself, and that’s an important shift we all need to start thinking about,” Krzanich stressed.
Drone-based data revolution
An important first step in this drone-based data revolution is making the collection of all this new data easy and seamless, he said.
Once the drones collect the data, AI technologies will play an important part in propelling the drone-based data revolution, Krzanich told the audience. Data is transforming every industry and providing opportunities, but it is the application of AI that drives new business insights. That is true for UAVs as well; otherwise, they are just very complex, smart and expensive toys, Krzanich warned.
“When you bring those drones together with insights from big data and AI, the whole world will begin to change,” he proclaimed.
Intel Insight platform
Krzanich unveiled the Intel Insight Platform — Intel’s vision for expediting the path from data to insights — during his keynote. The platform, which is optimized for large data sets, is a cloud-based system that allows customers to produce data, push it to the cloud, analyze it and generate reports, he explained.
“As we continue to grow this database, just like every other AI engine, it will become smarter, become more capable and it will have more applications,” he touted.
The automation capabilities are integral to the evolution of UAVs, Krzanich said. For example, in the near future drones could become a vital part of disaster response, so making them more automated and intelligent will be very important, he said.
“The future of drones is about making the drones easier to use, more intelligent, [and] driving the capability to the edge, simplifying the workload, automating the workload,” Krzanich said. “The industry needs to think one-touch and then analytics — that’s the real engine that will drive the value out of these devices as it is applied to all of the data that these systems are collecting.”
With cloud computing popularity on the ascent, Ben Chen has to think big. Chen is president of business development at a U.S. branch of China Unicom, a state-owned telecom and the second-largest wireless carrier in the world’s most populous country. I spoke to him at the Gartner Catalyst conference in San Diego earlier this month.
Based in San Jose, Calif., Chen’s division helps U.S. companies moving to China get set up with telecommunications, connecting their facilities abroad with stateside headquarters. And it does the same for Chinese companies building outposts in the U.S.
With the steep and rapid increase in the number of cloud adopters, especially in the U.S., Chen wonders, how relevant will a traditional telecom remain to customers?
“Maybe they will rely on more cloud services, because they have all their content on the cloud, and the cloud can be synchronized, so maybe they won’t need real connectivity between China and their headquarters anymore,” Chen said. “So we have to think about what role we are going to play.”
In China, China Unicom offers mobile and traditional voice and data services “very similar to AT&T,” Chen said. China Unicom, though, offers public cloud services, unlike AT&T and other U.S. telecoms, which have left the cloud market because they couldn’t compete with the likes of Amazon and Microsoft.
But China Unicom also can’t compete with the research-and-development power of providers like Alibaba Cloud, which are pouring money into innovative new technology and features, Chen said. Instead, it needs to find another way to take advantage of the current mass migration to the cloud.
China Unicom’s big differentiator is its vast mobile infrastructure, supporting approximately 300 million mobile customers, Chen said. Many users of mobile devices in China have no landline telephones — just handsets packed with mobile apps and reliant on connections to the internet and public cloud providers. Such a network can be leveraged in the face of accelerating cloud computing popularity, he said.
“That can play a more important role, working with the cloud providers and the users,” Chen said. “This is our value now rather than the traditional phone service and the other services. We should leverage our value to create more value through mobile.”
Plugging into the future
The internet of things (IoT) presents another market for growth, Chen said. China Unicom is working with technology vendors on smart homes, myriad smart devices, machines and vehicles; for example, it partnered with Cisco IoT division Jasper on a service to help automakers build connected cars.
Big data is a third area, Chen said. By collecting information on how its hundreds of millions of mobile customers use their devices, China Unicom can determine where service is concentrated and can put in a new cell tower, for example.
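As a toy illustration of that analysis (the session records and area names here are invented, not China Unicom data), the core of the idea is aggregating usage per area and picking the densest one as a candidate site for new capacity:

```python
from collections import Counter

# Hypothetical usage records: each entry is the cell area a mobile
# session was served from.
sessions = ["area-12", "area-7", "area-12", "area-3", "area-12", "area-7"]

# Count sessions per area and pick the busiest one as a candidate
# location for added capacity, such as a new cell tower.
by_area = Counter(sessions)
busiest_area, load = by_area.most_common(1)[0]
print(busiest_area, load)  # area-12 3
```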
Of course, cloud adoption isn’t universal — especially in China, which lags a few years behind the U.S. in taking on new technology, Chen said. So carriers like China Unicom that still provide traditional connectivity over dedicated circuits aren’t yet feeling the heat of rising cloud computing popularity.
“Maybe we have a few years. [Companies are] in migration — not everyone has moved to cloud yet,” Chen said. “But we have to be ready.”
To learn about what IT professionals at Gartner Catalyst said about cloud strategies at their organizations, read this SearchCIO report.
Many buyers, few suppliers — that could characterize the public cloud platform market today. Amazon Web Services, Amazon’s cloud division, and Microsoft Azure held nearly three-quarters of all revenue in 2017, according to Forrester Research.
But while a cloud infrastructure market with few big vendors poses big risks — including being forced to remain a customer of one vendor — consolidation also presents key benefits, an August report notes. For one, it’s getting cheaper to host IT operations on cloud services. And intense global competition means more options in local markets — for example, Azure is expanding to Africa and Google to South America, and China’s Alibaba is building new data centers worldwide.
Still, organizations that put all of their data and applications in one provider’s public cloud are exposing themselves to enormous risks, the report continued. If the provider experiences an outage, for example, companies could see their websites crash — and business skid to a halt — as hundreds of thousands did following the outage at AWS in February. Or a security breach at a provider could put their customer data and intellectual property into malicious hands.
CIOs can build hedges against vendor lock-in by adopting a multicloud approach, the report advised — spreading IT and business operations across several public cloud services and in private cloud deployments, where they can retain more control. That way, the fate of the business isn’t tied to any one public cloud provider.
CIOs can start by using multiple public cloud vendors — at least two — “and shift business from one to the other on a workload-by-workload basis,” the report read. For example, different platforms should be used for primary and backup storage.
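The report’s workload-by-workload split can be pictured as a simple placement table; the provider and workload names below are hypothetical stand-ins, not from the report:

```python
# Hypothetical sketch of a workload-by-workload multicloud policy:
# each workload is pinned to a provider, and primary and backup
# storage are deliberately kept on different platforms.

WORKLOAD_PLACEMENT = {
    "web-frontend": "provider-a",
    "analytics": "provider-b",
    "primary-storage": "provider-a",
    "backup-storage": "provider-b",  # never the same as primary
}

def placement(workload: str) -> str:
    """Return the public cloud assigned to a workload."""
    return WORKLOAD_PLACEMENT[workload]

# The hedge only works if primary and backup never share a provider.
assert placement("primary-storage") != placement("backup-storage")
```

The point of the sketch is that the mapping — not any single provider — becomes the unit of control: shifting a workload means editing one entry, not re-architecting the whole estate.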
As part of any cloud strategy, IT leaders should also include private cloud to lower the risk of being too dependent on any one public cloud platform. Though it costs more to build and operate a cloud on premises, “you’ll retain more control over when and how you upgrade and pay for it,” according to the report.
In an interview, Forrester analyst Andrew Bartels, the lead author of the report, said CIOs often start their cloud journeys by adopting private clouds first because of concerns about security and control – they don’t want any data out of their sights — and then move more and more pieces of their IT infrastructure to the public cloud as they get comfortable with it.
At that point, many CIOs will run up to, say, 70% of their operations in the public cloud, because of the economics of the cloud, but still keep 30% in a private cloud “as a hedge,” Bartels said.
So if the cloud vendor “starts getting greedy,” CIOs can pull data back to their own environment where they’re not exposed to risk, he said.
No easy move — yet
The strategy works because moving from public to private cloud and back can be easier and cheaper than moving from one public cloud service to another, the report read — and no public cloud provider has yet made moves to change that.
Bartels said that may be because it’s not in their interest to allow customers to easily switch to another provider. “It may also be because they’re not getting client demand for it,” he said.
That may change, Bartels said, as companies continue building multicloud environments. If any vendor does give the option for cloud-to-cloud migration, brisk competition in the public cloud platform market means other vendors will quickly follow suit. It won’t happen today or tomorrow, though.
“We’re not at that point on the cloud platform side of vendors seeking to get the same degree of lock-in that you see on the SaaS application vendor side,” Bartels said.
To learn about the risk of vendor lock-in with prepackaged cloud applications, read this SearchCIO report.
With cyberattacks increasing in sophistication and data privacy laws such as the European General Data Protection Regulation set to go into effect in 2018, organizations should expect to see healthy worldwide growth in the information security market, according to a Gartner report released last week. Security is also top of mind for technology companies these days. This week, Google revealed the technical details of its custom security chip Titan, which is designed to better secure the hardware behind its cloud services.
In other news, online retail giant Amazon said it expects to close the $13.7 billion Whole Foods deal next week, while its competitor Walmart announced it is teaming up with Google as it plans to dive into the voice-assisted shopping realm.
Here are the headlines in this week’s SearchCIO news roundup in more detail.
Information security market to reach $86.4B in 2017. IT research outfit Gartner foresees worldwide spending on information security products and services will rise by 7% this year, with spending expected to reach $93 billion in 2018. The growth is spurred by the string of recent data breaches. “Rising awareness among CEOs and boards of directors about the business impact of security incidents and an evolving regulatory landscape have led to continued spending on security products and services,” Sid Deshpande, a research analyst at Gartner, said in a statement last week. Security services such as IT outsourcing, consulting and implementation services, will continue to be the fastest growing segment, according to the report. The European General Data Protection Regulation, a new framework for European data protection laws that goes into effect May 2018, is another driving force behind the growth of the information security market, Gartner believes. It is expected to drive 65% of data loss prevention buying decisions through 2018, according to the report.
Competition among tech titans continues. On Wednesday, the U.S. Federal Trade Commission approved Amazon’s $13.7 billion bid to purchase Whole Foods Market, a national chain of natural and organic food grocery stores. The FTC nod will help Amazon secure a larger foothold in the $700 billion U.S. grocery market industry, an area that is currently dominated by Walmart. The very same day, Walmart revealed its plans for voice-activated shopping — a space dominated by Amazon’s Alexa-powered Echo — through a partnership with Google. Starting in September, Walmart will begin offering items for voice shopping via Google Assistant, the retail company said. “We will continue to focus on creating new opportunities to simplify people’s lives and help them shop in ways they’ve not yet imagined,” Marc Lore, president and chief executive of Walmart U.S. e-commerce, said in a blog post.
Google ‘Titan’s’ security in the cloud. This week, internet search giant Google revealed technical details of its Titan security chip, designed to better secure the machines that power its cloud services. The chip, unveiled in March at Google Cloud Next, establishes a root of trust or a security protocol that validates the integrity of a machine’s hardware and software when booting and prevents the machine from doing so if an issue is detected, according to the Mountain View, Calif.-based company. “Google designed Titan’s hardware logic in-house to reduce the chances of hardware backdoors,” Google Cloud Platform engineers said in a blog post this week.
Cloud management software may help organizations bring order to cloud computing chaos — managing and deploying a diversity of cloud services, keeping track of services used for billing purposes and making the best use of cloud infrastructure.
Once they get the green light to buy and then install such software – known as a cloud or multicloud management platform — organizations would do well to draft a deployment plan, advised the Cloud Standards Customer Council. The group, which works on establishing standards for the cloud industry, hosted a webinar on understanding and evaluating cloud management software in late July.
The key deployment question organizations should entertain is whether to buy traditional software, which would reside on their own servers, or a prepackaged software-as-a-service (SaaS) offering. IBM cloud expert Mike Edwards spoke in the webinar about the two options. Subscribing to cloud software takes away the burden of having people in-house who “understand how to do that installation, how to install the bits and then run it.” But a SaaS application won’t fit every business situation.
“There’s no one answer,” Edwards said.
William Van Order, a cloud expert at aerospace and defense company Lockheed Martin, laid out other key points organizations should mull over before deploying a multicloud management platform.
Make partnerships. Getting buy-in from other groups in the organization before deploying cloud management software is crucial, Van Order said in the webinar. The software’s capabilities — billing and budgeting and self-service provisioning options among others — reach across the business, so end users, the IT security team and the finance department should all be involved.
Set reasonable objectives. A cross-section of the organization should help set a “common vision and goals” for a multicloud management platform, Van Order said. Because business priorities for the project vary widely – increased agility, more speed in deploying applications, optimizing cloud computing costs, reducing staff size – priorities need to be established at the outset.
The deployment should be rolled out in phases, Van Order said, along with a change management plan to train and get constituencies on board. “This is never going to be just a once-and-done effort,” he said. “Understand what your vision and goals are and establish those use cases to meet those business priorities.”
Understand the multicloud management platform’s role in the cloud ecosystem. The software helps consolidate management for all cloud services in an organization, according to a CSCC report released in July, shortly before the webinar. To achieve the full value, it must integrate with the tools that support functions in the cloud infrastructure — service management software, for example, or DevOps and financial management tools.
Whether using a SaaS or on-premises system, Van Order implored, organizations need to look at a “complete picture of what the introduction of a cloud management platform is going to do to your overall cloud ecosystem.”
Identify risks and opportunities early. In both the evaluation and deployment process, organizations need to stay abreast of the risks a deployment poses to day-to-day operations — and the opportunities for improvement, according to the report. That way, they can more easily seek out alternatives if things go south.
“Identify things that work for you — what lessons have you learned as you’re doing this phased deployment?” Van Order said. “Be willing to modify your plans when outcomes shift, as well as when your business priorities shift.”
As organizations continue to look to cloud services for IT and business uses, their computing environments are becoming vast, entangled webs that span public cloud services and various forms of private cloud. They’re exceedingly difficult to monitor, manage and secure.
“Typically, each of those individual platforms may have a management tool for that one platform, but using different tools for each system you’re using is just painful,” said cloud computing standards advocate Mike Edwards. “It’s not a good place to be.”
Edwards, who develops cloud applications at IBM, spoke in a webinar hosted by the Cloud Standards Customer Council on Wednesday on cloud management platforms, commercial tools that can help organizations navigate hybrid cloud environments.
The webinar aired shortly after the publication of a July report on using and managing hybrid cloud management platforms, which are designed to “simplify the management of resources such as applications and data infrastructure across multiple clouds,” said Karl Scott, a consultant at Satori Consulting.
Scott and Edwards delved into the variety of functions that such tools should perform for organizations seeking to lower costs, improve efficiency and innovate as swiftly as they can.
Integration. Hybrid cloud management platforms must pull together computing systems that live inside and outside the organization, Scott said. First, there are the cloud services themselves — public cloud infrastructure such as Amazon Web Services and Microsoft Azure, internal private cloud and also hosted private cloud, which are run in a provider’s data center on servers dedicated to one customer.
These cloud systems have to also blend in existing enterprise tools – things like incident, configuration and asset management software – Scott said, “because it doesn’t necessarily make sense to rip and replace all systems in the environment.”
General services. These “play a key role to expose hybrid services,” Scott said. Components include a central management portal that can be accessed on a web browser as well as on mobile devices and a service catalog listing all the cloud services that are available.
Analytics and reporting are important, too, Scott said, for “understanding the consumption of cloud services.” For example, the tools can point administrators to services the organization is running – and being charged for – but not using.
Service management. The purpose here, Edwards said, is to simplify administration of all policy-guided IT services. Managing service levels is one key piece.
“It’s essentially about ensuring availability of the services that you’re using and that you’re getting the performance you expect out of those services to meet the service levels that are agreed upon with your users,” he said.
Cloud management platforms also need to monitor the integrated information from all the cloud services and show users what’s happening. And they need to do capacity monitoring, or keep track of computing resources available. That’s critical for determining where certain applications should be run, Edwards said.
“For example, it may well be the case that a private cloud system you have on premises will have definite limits on so many machines, on so much storage and so on,” he said. So a public cloud may be a better choice for a particular workload.
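A minimal sketch of that placement decision, with invented capacity numbers, might look like this — the private cloud is preferred until its hard limits are hit:

```python
# Hedged sketch: pick a deployment target for a workload based on
# the private cloud's remaining capacity. All numbers are illustrative.

PRIVATE_CAPACITY = {"vcpus": 64, "storage_tb": 100}  # hard on-prem limits
PRIVATE_IN_USE = {"vcpus": 60, "storage_tb": 92}     # current utilization

def place_workload(needed_vcpus: int, needed_storage_tb: int) -> str:
    """Prefer the private cloud; overflow to public when it won't fit."""
    free_vcpus = PRIVATE_CAPACITY["vcpus"] - PRIVATE_IN_USE["vcpus"]
    free_storage = PRIVATE_CAPACITY["storage_tb"] - PRIVATE_IN_USE["storage_tb"]
    if needed_vcpus <= free_vcpus and needed_storage_tb <= free_storage:
        return "private"
    return "public"

print(place_workload(2, 5))   # fits in the 4 free vCPUs -> private
print(place_workload(16, 5))  # exceeds free vCPUs -> public
```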
Financial management. Organizations need to track the amount of resources they’re using and spending money on, and the financial management component of hybrid cloud management platforms helps them do that, Edwards said. It does metering, collecting service usage statistics and analyzing usage patterns, allocates costs to the right departments and handles the various invoices coming from cloud providers.
The financial component must also help organizations plan how much cloud computing power they will need in the future, Edwards said. “The ability to forecast the way you’re going to be next week, next month is another key factor.”
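The metering and chargeback steps Edwards describes could be sketched roughly as follows; the per-hour rate, department names and usage figures are made up for illustration:

```python
# Illustrative sketch of metering and cost allocation: usage records
# are tagged with a department, and costs are allocated from a
# per-unit rate so each department sees its own bill.

RATE_PER_HOUR = 0.10  # hypothetical $/instance-hour

usage = [
    {"dept": "marketing", "hours": 1200},
    {"dept": "engineering", "hours": 3000},
    {"dept": "marketing", "hours": 800},
]

def allocate(records):
    """Sum metered cost per department."""
    costs = {}
    for r in records:
        costs[r["dept"]] = costs.get(r["dept"], 0.0) + r["hours"] * RATE_PER_HOUR
    return costs

print(allocate(usage))  # {'marketing': 200.0, 'engineering': 300.0}
```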
Resource management. Resources for cloud computing include virtual machines and object storage, certainly, Edwards said, but the on-demand nature of cloud means organizations need to manage and allocate network, software and database capabilities.
To do that, they’ll need discovery – visibility into what cloud resources are there for the taking. They’ll also need to tag resources so they’re associated with the right applications or departments and automate the provisioning and orchestration of computing resources.
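The tagging and discovery step might be sketched like this, with hypothetical resource names; anything without ownership tags is flagged so it can be attributed to an application or department, or reclaimed:

```python
# Sketch of tagging and discovery: every resource should carry owner
# tags, and discovery surfaces anything untagged. Resource IDs and
# tag keys here are invented.

resources = [
    {"id": "vm-101", "tags": {"app": "billing", "dept": "finance"}},
    {"id": "vm-102", "tags": {}},  # orphaned: no owner recorded
    {"id": "disk-7", "tags": {"app": "billing", "dept": "finance"}},
]

def untagged(inventory):
    """Return IDs of resources with no ownership tags."""
    return [r["id"] for r in inventory if not r["tags"]]

print(untagged(resources))  # ['vm-102']
```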
“A given workload may have a number of different resources that need to be pulled together to make it work,” Edwards said. “We must make sure those are orchestrated appropriately so that the whole thing, the whole application, is going to work properly for us.”
Organizations also need to be able to move workloads from cloud to cloud, whether public to public — Azure to Google Cloud Platform, for example — public to private or private to public. A hike in cloud provider pricing may demand such a move, for example, as could the need for faster processing of data, which private clouds can often provide.
Governance. Hybrid cloud usage must be in accordance with an organization’s policies, Edwards said. Policy-based management in a cloud management platform, for example, can prevent the moving of confidential data to the public cloud. Compliance with industry standards and regulations is also critical, Edwards said.
“We need to be looking for appropriate, ideally policy-based governance capabilities built into the cloud management platform, which can get automatically handled as we perform deployments and spin up resources inside the cloud systems,” he said.
Security. Organizations need mechanisms in their hybrid clouds to ensure security, Edwards said, so a cloud management system needs to manage how and when encryption is applied, for example. Role-based access control, or limiting certain usage to certain roles — admin or end user or developer — is important in ensuring that information gets into intended hands only.
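Role-based access control as Edwards describes it can be illustrated with a minimal sketch; the roles, users and permission names are invented:

```python
# Minimal role-based access control sketch: permissions attach to
# roles, users are assigned roles, and an access check consults the
# role's grants rather than the individual user.

ROLE_PERMISSIONS = {
    "admin": {"read", "write", "configure"},
    "developer": {"read", "write"},
    "end_user": {"read"},
}

USER_ROLES = {"alice": "admin", "bob": "end_user"}

def allowed(user: str, action: str) -> bool:
    """Check whether the user's role grants the requested action."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(allowed("alice", "configure"))  # True: admins may configure
print(allowed("bob", "write"))        # False: end users only read
```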
“You can never get away from security; it always matters,” Edwards said. “And the challenge with using hybrid cloud services from different providers is to make sure that all the resources that we’re allocating get the appropriate security elements dealt with when they’re deployed, when the resources are spun up and that everything is correct and in place.”
CIOs have some heavy lifting to do.
Machine learning — essentially algorithms that can process massive amounts of information in humanlike ways — offers IT chiefs a wealth of new opportunities, said Ed Featherston, vice president and principal architect at Cloud Technology Partners, a cloud computing consulting outfit in Boston.
“What machine learning does is help them identify patterns that they may not have seen or found before and find potential new business opportunities or new ways to change things in the business,” Featherston said in a video interview published last week. He spoke to SearchCIO at Cloud Expo in New York in June.
Many vendors offer machine learning capabilities — IBM, with its Watson supercomputer, is among the most famous; Amazon, Microsoft and Google all have their own services, and they’re readily available to CIOs. To work their analytics magic, though, they require vast pools of data, presenting a significant challenge, Featherston said: getting data to where the algorithm is.
“If I’m using IBM Watson, for example, and I have 50 PB of data,” he said in the video, “sending that out over the internet: probably not going to be an optimum solution.”
Think for a moment about how big just one petabyte (PB) is. Tech explainer site Lifewire equates it to “over 4,000 digital photos per day, over your entire life.” It could take a typical company years to transfer all of it to the cloud.
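A back-of-the-envelope check supports the “years” claim. Assuming a sustained 1 Gbps link — generous for many corporate uplinks — and decimal petabytes:

```python
# Rough arithmetic: time to push a large data set over a fully
# utilized 1 Gbps link (an assumed, optimistic sustained rate).

LINK_BPS = 1_000_000_000   # 1 Gbps
PETABYTE_BYTES = 10 ** 15  # decimal petabyte

def transfer_days(petabytes: float) -> float:
    """Days to move the data set at the assumed line rate."""
    bits = petabytes * PETABYTE_BYTES * 8
    return bits / LINK_BPS / 86_400

print(round(transfer_days(1), 1))         # ~92.6 days for a single PB
print(round(transfer_days(50) / 365, 1))  # ~12.7 years for 50 PB
```

At that rate, Featherston’s hypothetical 50 PB Watson workload would spend over a decade in transit — which is exactly why the providers ship hardware instead.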
Of course, the same tech giants offering machine learning capabilities are also public cloud providers with the power to process the amount of data needed. And they all have ways for companies to get it to them, Featherston said.
Amazon Web Services (AWS), for example, has an appliance called Snowball that’s shipped to a company that wants to move data. Up to 80 terabytes (TB) of data can be transferred onto the device; then it’s physically shipped to an AWS data center. (A terabyte, while no petabyte, is nothing to sneeze at. Lifewire estimates it would take 1,498 CD-ROM discs to hold 1 TB.)
Still, AWS goes much bigger than that. Keeping with the piling-on theme, AWS last year rolled out the Snowmobile, a truck that can hold and ship 100 PB of data.
“They drive up with a tractor trailer full of storage units to your location, tie into your network, load those petabytes of data up onto it, drive the truck back to an Amazon location and load that data up onto the network,” Featherston said.
Other providers are catching up to AWS, the top-selling cloud service. Google’s Google Cloud Platform – a distant third in the public cloud market, behind Microsoft’s Azure — last week released its Google Transfer Appliance, in two sizes. Up to 480 TB of data can be put onto the larger one, and then UPS or FedEx can pick it up and cart it away to Google.
Delivery methods that make it easy for customers to transfer data to cloud providers make sense, Featherston said, because vendors know that having lots of data on hand is key for enabling machine learning capabilities.
“The more information [machine learning] has to work with, and the feedback information it has to work with, the more it can produce usable results,” he said. “So the volume is critical, but the vendors that are offering these algorithms are also offering you ways to get that data there.”
It’s not bad business, either.
What’s known as multicloud IT operations today often involve more than just cloud computing. A company might have data and applications with several cloud providers — on cloud infrastructure provided by Amazon Web Services or Microsoft Azure, on a developer-friendly platform as a service and on an internal private cloud, built on premises.
But unless the company was recently founded — in which case it most likely is all cloud — it probably has at least a portion of its data and software on physical servers. That’s why hybrid cloud, and the larger universe in which it exists, the hybrid IT environment — part cloud resources, part on-premises — is becoming the norm today.
‘A better IT’
Adroitly managing that mix of cloud and on-premises IT operations is key to getting benefits such as greater IT efficiency and lower overhead costs, said Murali Balcha, founder and CTO of Trilio, a data protection service provider in Hopkinton, Mass.
“Essentially, the idea is to leverage the capabilities of various cloud software to implement a better IT for yourself,” Balcha said at the recent OpenStack Summit in Boston, a gathering of users of and contributors to the open source software platform.
Organizations that properly manage a hybrid IT environment, Balcha said, can take advantage of the public cloud when they need it — for moving workloads from on-premises to cloud as needed; shifting applications among different cloud deployments; and dialing up public cloud resources if business demands call for it.
But setting up such an operation is no simple task, Balcha said. On-premises servers and cloud need to be in sync – they must have access to the same data sets. “You need to have this layer where this data access flows between on prem and all the clouds within the hybrid cloud,” he said.
Characteristics of hybrid IT
Balcha detailed what he called four enablers of a hybrid IT environment:
Data capture must be platform-agnostic. Data has to be captured — acquired and stored — and applications have to run in not just one provider’s cloud, but in each one an organization uses, Balcha said.
“We deploy lots of applications in IT, but [most organizations] don’t capture these data sets in a way that is consumed across all the clouds,” Balcha said. “If you can’t consume your data sets that you are deploying on one cloud on a different cloud, that limits what you can do with the hybrid cloud.”
Standardizing on a common platform such as OpenStack is one way to go, he said. That way, on-premises servers and cloud deployments are all running the same cloud software.
Data sets need to be mobile. Organizations have to be able to securely move data from cloud to cloud, Balcha said, so they can “run some applications on the same data set on a different cloud.”
The best way to move data among clouds is by using cloud storage, Balcha said, Amazon’s Simple Storage Service being the most popular. Cloud storage can also be used to access the data wherever it lies.
Applications have to be reorchestrated. For cloud deployments, orchestrating means rearranging processes and components so systems running in far-flung locations are connected. Once applications can be moved among on-premises and cloud deployments, organizations need to reorchestrate them, Balcha said.
For example, an application running in a private cloud built on OpenStack has to be refitted for AWS, but that’s relatively easy, Balcha said, as long as virtual machines and other resource types are in a standard format.
A single pane of glass is needed to manage all clouds. In a hybrid IT environment, Balcha said, all cloud deployments should be managed through one management console on a computer monitor, say, or a mobile device screen.
If an organization has four cloud deployments, “You should not feel that you need to log into all four different clouds,” Balcha said, so single sign-on is necessary. The less complexity exposed to users, the better. “The single pane of glass should hide all the details and provide you one simple interface.”
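The “single pane of glass” idea is essentially a facade over per-cloud clients. In this hedged sketch, the client classes are stand-ins for real provider SDKs; one aggregate call replaces logging into each cloud separately:

```python
# Facade sketch of a single management pane: one interface fans out
# to per-cloud clients, hiding the individual consoles. Cloud and
# workload names are hypothetical.

class CloudClient:
    """Stand-in for one provider's SDK or API client."""
    def __init__(self, name, workloads):
        self.name = name
        self.workloads = workloads

    def list_workloads(self):
        return [(self.name, w) for w in self.workloads]

class SinglePane:
    """One console over every cloud in the hybrid environment."""
    def __init__(self, clients):
        self.clients = clients

    def list_all(self):
        # Aggregate all clouds behind a single call.
        return [w for c in self.clients for w in c.list_workloads()]

pane = SinglePane([
    CloudClient("cloud-a", ["web"]),
    CloudClient("cloud-b", ["analytics", "backup"]),
])
print(pane.list_all())
# [('cloud-a', 'web'), ('cloud-b', 'analytics'), ('cloud-b', 'backup')]
```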
To find out how organizations today are dealing with multicloud environments, read this SearchCIO report.
Just as a healthy body can’t dodge every bacterial infection that comes its way, a sound organization should realize it cannot avoid getting hacked. That’s how Michael Chertoff, former secretary of the U.S. Department of Homeland Security and co-founder and executive chairman at the Chertoff Group, explains the reality of today’s threat environment to security professionals.
“Anybody telling you that you are going to avoid ever getting hacked is blowing smoke at you … because you can’t stop getting hacked, but what you can do is manage the risk of getting hacked,” Chertoff told the audience at the recent Cybertech conference in Fairfax, Va.
With the interdependence of the internet, an organization’s vulnerability — its attack surface — is no longer restricted to its own network, Chertoff said, discussing trends and challenges in cybersecurity. And that vulnerability will only increase as more things become internet-enabled.
Mirai-like malware, for example, hijacks internet of things (IoT) devices to launch distributed denial-of-service attacks, he said, referring to how IoT is affecting security and privacy.
“By bringing the IoT devices into play, we have not considered the fact that it’s going to be a problem not only for those who own these devices and may find malware coming in from these devices, but for everybody else who will become a victim of these botnets,” he said.
At the same time, ransomware attacks like WannaCry prove that attack surface issues are not just a question of zero-day exploits or cutting-edge malware; they are often about human failure to take simple steps like installing patches on time, Chertoff said.
Dealing with these threats as a society is of paramount importance, he said, because a failure at one organization can affect multitudes. “The ability to act collectively in order to protect ourselves and our community is an important part of cybersecurity strategy.” People need to be educated on the solutions out there that can help them manage risks in today’s threat environment.
Chertoff circled back to his infection analogy: Just as the human body uses the immune system as a second line of defense, organizations should build an equivalent second line of defense into their cybersecurity risk management approach, he stressed. They should focus on the attack pathway when securing their networks, because the problem is not just the initial breach, he said. Once attackers have penetrated a company's network, they will steal credentials, identify the data to be taken and then execute the exfiltration, all resulting in systemic damage to the network and beyond.
“At each of these stages you have an opportunity to deploy and exercise your immune system to stop and mitigate the damage and that’s when you use a whole set of tools, which I think is a more holistic approach to security,” he said.
When configuring their networks, organizations should consider security measures like identity authorization and role-based access control to determine a user's access rights, network segmentation to supervise what's going on within their networks, and privileged user monitoring to flag behavior that deviates from the norm, he advised.
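Two of those controls are simple to picture in code. Role-based access control reduces to a mapping from roles to permissions plus a check at each access, and privileged user monitoring reduces to comparing activity against a baseline. The roles, permissions and threshold below are made up for illustration, not drawn from any specific product:

```python
# Sketch: role-based access control plus a crude privileged-user anomaly
# check. Roles, permissions and the 3x threshold are illustrative only.

ROLE_PERMS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:configs"},
    "admin":    {"read:reports", "write:configs", "manage:users"},
}

def allowed(role: str, permission: str) -> bool:
    """RBAC check: a request passes only if the role grants the permission."""
    return permission in ROLE_PERMS.get(role, set())

def flag_anomaly(user_actions: int, baseline: int) -> bool:
    """Privileged-user monitoring: flag activity far above the user's norm."""
    return user_actions > 3 * baseline

print(allowed("analyst", "write:configs"))         # False: outside the role
print(allowed("admin", "manage:users"))            # True
print(flag_anomaly(user_actions=40, baseline=10))  # True: 4x normal volume
```

Each function corresponds to one of Chertoff's intervention points on the attack pathway: the first blocks misuse of stolen low-privilege credentials, the second surfaces the unusual volume of activity that exfiltration tends to produce.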
“In the end what defines your strategy for securing yourself are your policies, governance … your understanding of what the key assets are and then your ability to train people and deploy them with technology to execute the plan,” he said.
NEW YORK — In the digital age, IT projects are often focused on making IT processes more efficient and thus more responsive to business demand — a move to the public cloud to scale up or down as needed, for example, or using agile project management to get innovative applications out the door fast.
Other IT projects have the sole purpose of helping the business run better and more efficiently. CIOs at the Argyle CIO Leadership Forum here on Tuesday swapped examples of using technology to improve business operations — and sharpen the way business does business.
For Kenneth Corriveau, CIO at Omnicom Media Group, implementing collaboration tools such as Slack has made it easier to exchange ideas and information and has “flattened the hierarchical nature of the organization.”
The company, a division of global marketing and corporate communications company Omnicom Group, helps organizations determine where and how to place ads for products and services. It has a youthful workplace, Corriveau said, with approximately 60% of employees not yet 30.
“They’re coming out of school or college with access to tools and information on a whim,” he said. “So how do we provide an environment where they can Google it, find a tool, go out there, download it and use it?”
Corriveau is doing that by providing “guardrails,” so IT can have oversight into what new applications are getting added.
Shadow IT, the practice of employees ordering up their own online services under IT's radar, is exactly what Barbara Spengler, CIO at Wyndham Destination Network, wanted to shed light on. The company is a division of Wyndham Worldwide, owner of the Wyndham hotel chain, and manages a variety of rental accommodations and timeshares.
“Every department was going off and partnering and signing up for licensing agreements with certain vendors,” Spengler said, “but they got to a point where they realized they needed some IT help,” especially with integrating data with back-end systems.
So she went to the application providers with the aim of putting more control over capabilities and features into business users’ hands, “so that IT isn’t the bottleneck for things.”
Now a close partnership between IT and the business units has improved operations at the company. “We’re working very closely with them and trying to get them to own and manage a lot of the technology themselves.”