Cloud management software may help organizations bring order to cloud computing chaos — managing and deploying a diversity of cloud services, keeping track of services used for billing purposes and making the best use of cloud infrastructure.
Once they get the green light to buy and then install such software – known as a cloud or multicloud management platform — organizations would do well to draft a deployment plan, advised the Cloud Standards Customer Council. The group, which works on establishing standards for the cloud industry, hosted a webinar on understanding and evaluating cloud management software in late July.
The key deployment question organizations should entertain is whether to buy traditional software, which would reside on their own servers, or a prepackaged software-as-a-service (SaaS) offering. IBM cloud expert Mike Edwards spoke in the webinar about the two options. Subscribing to cloud software takes away the burden of having people in-house who “understand how to do that installation, how to install the bits and then run it.” But a SaaS application won’t fit every business situation.
“There’s no one answer,” Edwards said.
William Van Order, a cloud expert at aerospace and defense company Lockheed Martin, laid out other key points organizations should mull over before deploying a multicloud management platform.
Make partnerships. Getting buy-in from other groups in the organization before deploying cloud management software is crucial, Van Order said in the webinar. The software’s capabilities — billing and budgeting and self-service provisioning options among others — reach across the business, so end users, the IT security team and the finance department should all be involved.
Set reasonable objectives. A cross-section of the organization should help set a “common vision and goals” for a multicloud management platform, Van Order said. Because business priorities for the project vary widely – increased agility, more speed in deploying applications, optimizing cloud computing costs, reducing staff size – priorities need to be established at the outset.
The deployment should be rolled out in phases, Van Order said, along with a change management plan to train and get constituencies on board. “This is never going to be just a once-and-done effort,” he said. “Understand what your vision and goals are and establish those use cases to meet those business priorities.”
Understand the multicloud management platform’s role in the cloud ecosystem. The software helps consolidate management for all cloud services in an organization, according to a CSCC report released in July, shortly before the webinar. To achieve its full value, it must integrate with the tools that support functions in the cloud infrastructure – service management software, DevOps tools and financial management tools, for example.
Whether using a SaaS or on-premises system, Van Order implored, organizations need to look at a “complete picture of what the introduction of a cloud management platform is going to do to your overall cloud ecosystem.”
Identify risks and opportunities early. In both the evaluation and deployment process, organizations need to stay abreast of the risks a deployment poses to day-to-day operations — and the opportunities for improvement, according to the report. That way, they can more easily seek out alternatives if things go south.
“Identify things that work for you — what lessons have you learned as you’re doing this phased deployment?” Van Order said. “Be willing to modify your plans when outcomes shift, as your business priorities might shift as well.”
As organizations continue to look to cloud services for IT and business uses, their computing environments are becoming vast, entangled webs that span public cloud services and various forms of private cloud. They’re exceedingly difficult to monitor, manage and secure.
“Typically, each of those individual platforms may have a management tool for that one platform, but using different tools for each system you’re using is just painful,” said cloud computing standards advocate Mike Edwards. “It’s not a good place to be.”
Edwards, who develops cloud applications at IBM, spoke in a webinar hosted by the Cloud Standards Customer Council on Wednesday on cloud management platforms, commercial tools that can help organizations navigate hybrid cloud environments.
The webinar aired shortly after the publication of a July report on using and managing hybrid cloud management platforms, which are designed to “simplify the management of resources such as applications and data infrastructure across multiple clouds,” said Karl Scott, a consultant at Satori Consulting.
Scott and Edwards delved into the variety of functions that such tools should perform for organizations seeking to lower costs, improve efficiency and innovate as swiftly as they can.
Integration. Hybrid cloud management platforms must pull together computing systems that live inside and outside the organization, Scott said. First, there are the cloud services themselves — public cloud infrastructure such as Amazon Web Services and Microsoft Azure, internal private cloud and also hosted private cloud, which are run in a provider’s data center on servers dedicated to one customer.
These cloud systems have to also blend in existing enterprise tools – things like incident, configuration and asset management software – Scott said, “because it doesn’t necessarily make sense to rip and replace all systems in the environment.”
General services. These “play a key role to expose hybrid services,” Scott said. Components include a central management portal that can be accessed on a web browser as well as on mobile devices and a service catalog listing all the cloud services that are available.
Analytics and reporting are important, too, Scott said, for “understanding the consumption of cloud services.” For example, the tools can point administrators to services the organization is running – and being charged for – but not using.
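That kind of idle-spend check can be sketched in a few lines. The record format below is hypothetical; real platforms expose comparable figures through their billing and usage-reporting APIs.

```python
# Hypothetical usage/billing records; real platforms surface similar
# data through their reporting APIs.
records = [
    {"service": "vm-analytics", "monthly_cost": 420.0, "api_calls": 0},
    {"service": "object-store", "monthly_cost": 95.0, "api_calls": 12840},
    {"service": "test-db", "monthly_cost": 60.0, "api_calls": 0},
]

def idle_but_billed(records):
    """Return services the organization pays for but does not use."""
    return [r["service"] for r in records
            if r["monthly_cost"] > 0 and r["api_calls"] == 0]

print(idle_but_billed(records))  # ['vm-analytics', 'test-db']
```

A real report would also weigh usage trends over time, but the core idea is the same: join the billing feed to the usage feed and flag the mismatches.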
Service management. The purpose here, Edwards said, is to simplify administration of all policy-guided IT services. Managing service levels is one key piece.
“It’s essentially about ensuring availability of the services that you’re using and that you’re getting the performance you expect out of those services to meet the service levels that are agreed upon with your users,” he said.
Cloud management platforms also need to monitor the integrated information from all the cloud services and show users what’s happening. And they need to do capacity monitoring, or keep track of computing resources available. That’s critical for determining where certain applications should be run, Edwards said.
“For example, it may well be the case that a private cloud system you have on premises will have definite limits on so many machines, on so much storage and so on,” he said. So a public cloud may be a better choice for a particular workload.
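A placement decision like the one Edwards describes can be reduced to a capacity check. This is a minimal sketch with made-up capacity numbers, not any vendor's scheduler:

```python
def place_workload(required_vms, private_capacity, private_in_use):
    """Pick a deployment target: prefer the private cloud, but fall back
    to public cloud when fixed on-premises capacity would be exceeded."""
    if private_in_use + required_vms <= private_capacity:
        return "private"
    return "public"

# Private cloud capped at 100 VMs, 92 already running:
print(place_workload(10, 100, 92))  # public
print(place_workload(5, 100, 92))   # private
```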
Financial management. Organizations need to track the amount of resources they’re using and spending money on, and the financial management component of hybrid cloud management platforms helps them do that, Edwards said. It does metering, collecting service usage statistics and analyzing usage patterns, allocates costs to the right departments and handles the various invoices coming from cloud providers.
The financial component must also help organizations plan how much cloud computing power they will need in the future, Edwards said. “The ability to forecast the way you’re going to be next week, next month is another key factor.”
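Metering, cost allocation and forecasting can be illustrated with a toy example. The record shapes and the simple averaging forecast below are assumptions for illustration; commercial tools use far richer models:

```python
from collections import defaultdict

# Hypothetical metering records: (department, service, cost)
usage = [
    ("marketing", "vm", 1200.0),
    ("marketing", "storage", 300.0),
    ("finance", "vm", 800.0),
]

def allocate_costs(usage):
    """Roll metered charges up to the department that incurred them."""
    totals = defaultdict(float)
    for dept, _service, cost in usage:
        totals[dept] += cost
    return dict(totals)

def forecast_next_month(monthly_totals):
    """Naive forecast: average of recent months. Real tools apply
    trend- and seasonality-aware models."""
    return sum(monthly_totals) / len(monthly_totals)

print(allocate_costs(usage))                    # {'marketing': 1500.0, 'finance': 800.0}
print(forecast_next_month([2000, 2300, 2600]))  # 2300.0
```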
Resource management. Resources for cloud computing include virtual machines and object storage, certainly, Edwards said, but the on-demand nature of cloud means organizations need to manage and allocate network, software and database capabilities.
To do that, they’ll need discovery – visibility into what cloud resources are there for the taking. They’ll also need to tag resources so they’re associated with the right applications or departments and automate the provisioning and orchestration of computing resources.
“A given workload may have a number of different resources that need to be pulled together to make it work,” Edwards said. “We must make sure those are orchestrated appropriately so that the whole thing, the whole application, is going to work properly for us.”
Organizations also need to be able to move workloads from cloud to cloud, whether public to public — Azure to Google Cloud Platform, for example — public to private or private to public. A hike in cloud provider pricing may demand such a move, for example, as could the need for faster processing of data, which private clouds can often provide.
Governance. Hybrid cloud usage must be in accordance with an organization’s policies, Edwards said. Policy-based management in a cloud management platform, for example, can prevent the moving of confidential data to the public cloud. Compliance with industry standards and regulations is also critical, Edwards said.
“We need to be looking for appropriate, ideally policy-based governance capabilities built into the cloud management platform, which can get automatically handled as we perform deployments and spin up resources inside the cloud systems,” he said.
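Policy-based governance of the kind Edwards describes often boils down to evaluating placement rules before provisioning. A minimal sketch, with a single hypothetical rule:

```python
def deployment_allowed(workload, target):
    """Evaluate a simple, hypothetical placement policy before
    provisioning: confidential data must stay off the public cloud."""
    policies = [
        lambda w, t: not (w["classification"] == "confidential"
                          and t == "public"),
    ]
    return all(rule(workload, target) for rule in policies)

print(deployment_allowed({"classification": "confidential"}, "public"))  # False
print(deployment_allowed({"classification": "internal"}, "public"))      # True
```

In a real platform the policy list would be loaded from governance configuration and evaluated automatically on every deployment, which is the automation Edwards is pointing to.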
Security. Organizations need mechanisms in their hybrid clouds to ensure security, Edwards said, so a cloud management system needs to manage how and when encryption is applied, for example. Role-based access control, or limiting certain usage to certain roles — admin or end user or developer — is important in ensuring that information gets into intended hands only.
“You can never get away from security; it always matters,” Edwards said. “And the challenge with using hybrid cloud services from different providers is to make sure that all the resources that we’re allocating get the appropriate security elements dealt with when they’re deployed, when the resources are spun up and that everything is correct and in place.”
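Role-based access control is, at its core, a lookup from role to permitted actions. The roles and permission names below are illustrative, not any product's model:

```python
# Hypothetical role-to-permission mapping; real platforms ship far
# richer models, but the lookup has the same shape.
ROLE_PERMISSIONS = {
    "admin": {"provision", "delete", "read", "billing"},
    "developer": {"provision", "read"},
    "end_user": {"read"},
}

def is_allowed(role, action):
    """Role-based access control: an action is permitted only if the
    user's role grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("developer", "provision"))  # True
print(is_allowed("end_user", "delete"))      # False
```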
CIOs have some heavy lifting to do.
Machine learning — essentially algorithms that can process massive amounts of information in humanlike ways — offers IT chiefs a wealth of new opportunities, said Ed Featherston, vice president and principal architect at Cloud Technology Partners, a cloud computing consulting outfit in Boston.
“What machine learning does is help them identify patterns that they may not have seen or found before and find potential new business opportunities or new ways to change things in the business,” Featherston said in a video interview published last week. He spoke to SearchCIO at Cloud Expo in New York in June.
Many vendors offer machine learning capabilities — IBM, with its Watson supercomputer, is among the most famous; Amazon, Microsoft and Google all have their own services, and they’re readily available to CIOs. To work their analytics magic, though, they require vast pools of data, presenting a significant challenge, Featherston said: getting data to where the algorithm is.
“If I’m using IBM Watson, for example, and I have 50 PB of data,” he said in the video, “sending that out over the internet: probably not going to be an optimum solution.”
Think for a moment about how big just one petabyte (PB) is. Tech explainer site Lifewire equates it to “over 4,000 digital photos per day, over your entire life.” It could take a typical company years to transfer all of it to the cloud.
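The arithmetic behind Featherston's point is easy to check. Assuming a dedicated 1 Gbps link sustaining 80% of its nominal rate (both figures are assumptions, not from the article):

```python
def transfer_days(data_bytes, link_bits_per_sec, utilization=0.8):
    """Days needed to push a data set over a network link, assuming the
    link sustains the given fraction of its nominal rate."""
    seconds = (data_bytes * 8) / (link_bits_per_sec * utilization)
    return seconds / 86400  # seconds per day

PB = 10**15
# 1 PB over a dedicated 1 Gbps link at 80% utilization:
print(round(transfer_days(1 * PB, 1e9)))            # 116 days
# 50 PB, the figure from Featherston's Watson example:
print(round(transfer_days(50 * PB, 1e9) / 365, 1))  # 15.9 years
```

At those timescales, physically shipping storage starts to look very reasonable.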
Of course, the same tech giants offering machine learning capabilities are also public cloud providers with the power to process the amount of data needed. And they all have ways for companies to get it to them, Featherston said.
Amazon Web Services (AWS), for example, has an appliance called Snowball that’s shipped to a company that wants to move data. Up to 80 terabytes (TB) of data can be transferred onto the device; then it’s physically shipped to an AWS data center. (A terabyte, while no petabyte, is nothing to sneeze at. Lifewire estimates it would take 1,498 CD-ROM discs to hold 1 TB.)
Still, AWS goes much bigger than that. Keeping with the piling-on theme, AWS last year rolled out the Snowmobile, a truck that can hold and ship 100 PB of data.
“They drive up with a tractor trailer full of storage units to your location, tie into your network, load those petabytes of data up onto it, drive the truck back to an Amazon location and load that data up onto the network,” Featherston said.
Other providers are catching up to AWS, the top-selling cloud service. Google Cloud Platform – a distant third in the public cloud market, behind AWS and Microsoft Azure – last week released its Google Transfer Appliance, in two sizes. Up to 480 TB of data can be put onto the larger one, and then UPS or FedEx can pick it up and cart it away to Google.
Delivery methods that make it easy for customers to transfer data to cloud providers make sense, Featherston said, because vendors know that having lots of data on hand is key for enabling machine learning capabilities.
“The more information [machine learning] has to work with, and the feedback information it has to work with, the more it can produce usable results,” he said. “So the volume is critical, but the vendors that are offering these algorithms are also offering you ways to get that data there.”
It’s not bad business, either.
What’s known as multicloud IT operations today often involve more than just cloud computing. A company might have data and applications with several cloud providers — on cloud infrastructure provided by Amazon Web Services or Microsoft Azure, on a developer-friendly platform as a service and on an internal private cloud, built on premises.
But unless the company was recently founded — in which case it most likely is all cloud — it probably has at least a portion of its data and software on physical servers. That’s why hybrid cloud, and the larger universe in which it exists, the hybrid IT environment — part cloud resources, part on-premises — is becoming the norm today.
‘A better IT’
Adroitly managing that mix of cloud and on-premises IT operations is key to getting benefits such as greater IT efficiency and lower overhead costs, said Murali Balcha, founder and CTO of Trilio, a data protection service provider in Hopkinton, Mass.
“Essentially, the idea is to leverage the capabilities of various cloud software to implement a better IT for yourself,” Balcha said at the recent OpenStack Summit in Boston, a gathering of users of and contributors to the open source software platform.
Organizations that properly manage a hybrid IT environment, Balcha said, can take advantage of the public cloud when they need it — for moving workloads from on-premises to cloud as needed; shifting applications among different cloud deployments; and dialing up public cloud resources if business demands call for it.
But setting up such an operation is no simple task, Balcha said. On-premises servers and cloud need to be in sync – they must have access to the same data sets. “You need to have this layer where this data access flows between on prem and all the clouds within the hybrid cloud,” he said.
Characteristics of hybrid IT
Balcha detailed what he called four enablers of a hybrid IT environment:
Data capture must be platform-agnostic. Data has to be captured — acquired and stored — and applications have to run in not just one provider’s cloud, but in each one an organization uses, Balcha said.
“We deploy lots of applications in IT, but [most organizations] don’t capture these data sets in a way that is consumed across all the clouds,” Balcha said. “If you can’t consume your data sets that you are deploying on one cloud on a different cloud, that limits what you can do with the hybrid cloud.”
Standardizing on a common platform such as OpenStack is one way to go, he said. That way, on-premises servers and cloud deployments all run on the same underlying platform.
Data sets need to be mobile. Organizations have to be able to securely move data from cloud to cloud, Balcha said, so they can “run some applications on the same data set on a different cloud.”
The best way to move data among clouds is by using cloud storage, Balcha said, Amazon’s Simple Storage Service being the most popular. Cloud storage can also be used to access the data wherever it lies.
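Staging data through object storage amounts to a get from one store and a put to another. The in-memory stores below are stand-ins for real SDK clients (boto3 for Amazon S3, for instance), which expose the same basic shape:

```python
# In-memory stand-ins for object stores; a real implementation would
# use each provider's SDK with the same get/put pattern.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

def replicate(key, source, destination):
    """Move a data set between clouds by staging it through object
    storage: read from one store, write to the other."""
    destination.put(key, source.get(key))

aws_bucket, gcp_bucket = ObjectStore(), ObjectStore()
aws_bucket.put("dataset.parquet", b"...training data...")
replicate("dataset.parquet", aws_bucket, gcp_bucket)
print(gcp_bucket.get("dataset.parquet") == aws_bucket.get("dataset.parquet"))  # True
```

Production replication would add encryption in transit, integrity checks and chunked transfers, but the flow is the one Balcha describes.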
Applications have to be reorchestrated. For cloud deployments, orchestrating means rearranging processes and components so systems running in far-flung locations are connected. Once applications can be moved among on-premises and cloud deployments, organizations need to reorchestrate them, Balcha said.
For example, an application running in a private cloud built on OpenStack has to be refitted for AWS, but that’s relatively easy, Balcha said, as long as virtual machines and other resource types are in a standard format.
A single pane of glass is needed to manage all clouds. In a hybrid IT environment, Balcha said, all cloud deployments should be managed through one management console on a computer monitor, say, or a mobile device screen.
If an organization has four cloud deployments, “You should not feel that you need to log into all four different clouds,” Balcha said, so single sign-on is necessary. The less complexity exposed to users, the better. “The single pane of glass should hide all the details and provide you one simple interface.”
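A single pane of glass is essentially a facade over per-cloud clients. The client classes below are hypothetical, but they show the shape of the idea: sign in once, query everything through one interface.

```python
# Hypothetical per-cloud clients; a real console would wrap each
# provider's SDK behind the same interface.
class CloudClient:
    def __init__(self, name, workloads):
        self.name = name
        self.workloads = workloads

    def list_workloads(self):
        return self.workloads

class SinglePane:
    """One interface over many clouds: the user queries the pane,
    never the individual providers."""
    def __init__(self, clients):
        self.clients = clients

    def all_workloads(self):
        return {c.name: c.list_workloads() for c in self.clients}

pane = SinglePane([
    CloudClient("aws", ["billing-api"]),
    CloudClient("azure", ["data-warehouse"]),
])
print(pane.all_workloads())  # {'aws': ['billing-api'], 'azure': ['data-warehouse']}
```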
To find out how organizations today are dealing with multicloud environments, read this SearchCIO report.
Just as a healthy body can’t dodge every bacterial infection that comes its way, a sound organization should realize it cannot avoid getting hacked. That’s how Michael Chertoff, former secretary of the U.S. Department of Homeland Security and co-founder and executive chairman of the Chertoff Group, explains the reality of today’s threat environment to security professionals.
“Anybody telling you that you are going to avoid ever getting hacked is blowing smoke at you … because you can’t stop getting hacked, but what you can do is manage the risk of getting hacked,” Chertoff told the audience at the recent Cybertech conference in Fairfax, Va.
With the interdependence of the internet, an organization’s vulnerability, or attack surface, is no longer restricted to its own network, Chertoff said, discussing trends and challenges in cybersecurity. And that vulnerability will only increase as more things become internet-enabled.
Mirai-like malware, for example, uses internet of things (IoT) devices to launch distributed denial-of-service attacks, he said, referring to how IoT is affecting security and privacy.
“By bringing the IoT devices into play, we have not considered the fact that it’s going to be a problem not only for those who own these devices and may find malware coming in from these devices, but for everybody else who will become a victim of these botnets,” he said.
At the same time, ransomware attacks like WannaCry prove that attack surface issues are not just a question of zero-day exploits or cutting-edge malware; they are often about human failure to take simple steps like installing patches on time, Chertoff said.
Dealing with these threats as a society is of paramount importance, he said, because a failure at one organization can affect multitudes. “The ability to act collectively in order to protect ourselves and our community is an important part of cybersecurity strategy.” People need to be educated on the solutions out there that can help them manage risks in today’s threat environment.
Chertoff circled back to his infection analogy: Just as the human body uses the immune system as a second line of defense, organizations should adopt an equivalent model to their cybersecurity risk management approach, he stressed. They should focus on the attack pathway when securing their networks, because the problem is not just the initial breach, he said. Once the attackers have penetrated the company’s network, they will steal credentials, identify the data that’s going to be stolen and then execute the exfiltration of that data, all resulting in systemic damage to the network – and beyond.
“At each of these stages you have an opportunity to deploy and exercise your immune system to stop and mitigate the damage and that’s when you use a whole set of tools, which I think is a more holistic approach to security,” he said.
When configuring their networks, organizations should consider security measures like identity authorization and role-based access control to determine a user’s access rights, network segmentation to supervise what’s going on within their networks, and privileged user monitoring to monitor behavior that deviates from the normal, he advised.
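Privileged user monitoring of the sort Chertoff describes compares activity against a baseline of normal behavior. The sketch below is deliberately simple, using per-account working-hours baselines as a stand-in for a real behavioral model:

```python
def flag_anomalous_logins(events, baseline_hours):
    """Flag privileged-account logins outside each account's usual
    working hours -- a toy stand-in for behavioral monitoring."""
    alerts = []
    for user, hour in events:
        lo, hi = baseline_hours.get(user, (0, 23))
        if not (lo <= hour <= hi):
            alerts.append((user, hour))
    return alerts

baseline = {"db_admin": (8, 18)}  # normally active 08:00-18:00
events = [("db_admin", 10), ("db_admin", 3)]
print(flag_anomalous_logins(events, baseline))  # [('db_admin', 3)]
```

Real monitoring systems model many more signals (source IP, data volume, command patterns), but each check has this shape: compare observed behavior against a learned baseline and alert on deviation.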
“In the end what defines your strategy for securing yourself are your policies, governance … your understanding of what the key assets are and then your ability to train people and deploy them with technology to execute the plan,” he said.
NEW YORK — In the digital age, IT projects are often focused on making IT processes more efficient and thus more responsive to business demand — a move to the public cloud to scale up or down as needed, for example, or using agile project management to get innovative applications out the door fast.
Other IT projects have the sole purpose of helping the business run better and more efficiently. CIOs at the Argyle CIO Leadership Forum here on Tuesday swapped examples of using technology to improve business operations — and sharpen the way business does business.
For Kenneth Corriveau, CIO at Omnicom Media Group, implementing collaboration tools such as Slack has made it easier to exchange ideas and information and has “flattened the hierarchical nature of the organization.”
The company, a division of global marketing and corporate communications company Omnicom Group, helps organizations determine where and how to place ads for products and services. It has a youthful workplace, Corriveau said, with approximately 60% of employees not yet 30.
“They’re coming out of school or college with access to tools and information on a whim,” he said. “So how do we provide an environment where they can Google it, find a tool, go out there, download it and use it?”
Corriveau is doing that by providing “guardrails,” so IT can have oversight into what new applications are getting added.
The opposite of that visibility is shadow IT: employees ordering up their own online services, under IT’s radar. That’s what Barbara Spengler, CIO at Wyndham Destination Network, wanted to shed light on. The company, a division of Wyndham Worldwide, owner of the Wyndham hotel chain, manages a variety of rental accommodations and timeshares.
“Every department was going off and partnering and signing up for licensing agreements with certain vendors,” Spengler said, “but they got to a point where they realized they needed some IT help,” especially with integrating data with back-end systems.
So she went to the application providers with the aim of putting more control over capabilities and features into business users’ hands, “so that IT isn’t the bottleneck for things.”
Now a handy partnership between IT and business has worked to improve business operations at the company. “We’re working very closely with them and trying to get them to own and manage a lot of the technology themselves.”
When Andrew Stanley received the email from PayPal, he knew immediately that something was amiss. There, in the PayPal domain name, “under one of the a’s,” was a Turkish accent mark called a cedilla.
“If you looked at it on your laptop monitor, it looked like a little speck of dust,” said Stanley, who is the chief information security officer at Philips, the Amsterdam-based healthcare and consumer lifestyle company.
“I didn’t use anything with PayPal and I said, ‘What is that?’ I happened to put my finger on it and it didn’t move. That’s when the light went on,” Stanley told the audience at the recent MIT CIO Symposium, where he was a guest speaker in a session titled, You were hacked: Now what?
Of course, what caught the expert’s eye, a layman could easily miss. In fact, laymen do easily miss such warning signs on a regular basis. According to a recent study, over 90% of cyberattacks start with a phishing email.
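The near-invisible character in Stanley's phishing email is exactly the kind of thing a simple Unicode check can catch. A sketch (the rule here is a blunt heuristic, not a production filter):

```python
import unicodedata

def suspicious_domain(domain):
    """Flag domains containing combining marks or other non-ASCII
    characters that can render as near-invisible lookalikes."""
    decomposed = unicodedata.normalize("NFD", domain)
    return any(unicodedata.combining(ch) or ord(ch) > 127
               for ch in decomposed)

print(suspicious_domain("paypal.com"))        # False
print(suspicious_domain("pa\u0327ypal.com"))  # True: 'a' + combining cedilla
```

Mail gateways apply much more sophisticated confusable-character analysis, but even this blunt check would catch the speck of dust that fooled the naked eye.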
Educating employees on how to detect and prevent phishing attacks continues to be a crucial step in protecting sensitive information, Stanley said. Tabletop exercises simulating online attacks and penetration testing are other good ways to test an organization’s – and its employees’ – cyber incident response capability.
“Penetration testing forces you to be a little more real-time. In certain types of pen tests, they are actually looking at your detection systems to see what they can and can’t pick up,” he said.
He also stressed the need for hiring security intelligence staff.
“That’s one of my highest cost investments,” he said. “We have our tactical or technical intelligence team, which is able to look at trends and different phishing attempts and try to correlate that to a particular attacker. Then we have our strategic intel team that’s trying to figure out the ‘why’.”
Figuring out the “why” is vital, because determining the hackers’ intent before the information walks out of the door is going to help organizations prevent such attacks in the future, Stanley added.
James Lugabihl, director of execution assurance at HR management services firm ADP and a fellow panelist at the MIT CIO Symposium, said that fostering a security-conscious culture is one of the key strategic pillars of ADP’s security organization. “We try to drive that in every opportunity we can within our brand image.”
He laid out several steps to help drive a security culture: managing privileged administrator accounts, maintaining proper network segmentation and implementing the right crisis management plan. Organizations need to plan properly and focus on the proper execution of their incident response plan, he added.
“I don’t agree with ‘practice makes perfect’; perfect practice makes perfect. Because if you are doing it wrong in practice, you will continue to do it wrong when it hits the fan,” Lugabihl said.
File this in your folder marked Digital content strategy for the Millennial crowd. (More on the crowd part later.)
In 2015, Scripps Networks Interactive, parent to cable blockbusters HGTV, Food Network and the Travel Channel, launched a new business division: Scripps Lifestyle Studios. Its mission?
According to Vikki Neil, who oversees the division, a big aim was to get to where most companies and advertisers want to be these days: on the social platforms favored by Millennials — Snapchat, Facebook, YouTube, Instagram, etcetera — with the full digital panoply of videos, photos, blogs and articles.
Two years later, the 125-person division has racked up five billion video views and delivered some 5,000 pieces of original content distributed across seven social platforms, which it updates 24 times a day. That’s 750% growth — without raising headcount, Neil told an audience of digital strategists at the recent Digital Strategy Innovation Summit in New York.
“We basically went in and said, ‘Hey, guess what guys? Everyone has a new job. Starting tomorrow you’re all going to be content creators,’” she said.
Along the way, Lifestyle Studios has developed a better sense of how to reach advertising’s new favorite generation. One guiding principle of its digital content strategy: “It has to look authentic. If you create something that is faux, you need to call it faux and make a joke of it,” Neil said.
Another selling point will be familiar to parents of Millennials. “Communal stuff works well — they love a crowd,” a finding reflected in the digital content her division creates for Food Network and HGTV, and on the TV screen, Neil said.
“If you go to the shows, you’ll notice a lot more people now on the TV screens and in the digital content, for sure. You’ll see people doing things with their families, instead of just one person,” she said.
Big holiday gatherings also present opportunities for developing content for Millennials, but with a twist, Neil said. She pointed to a Millennial-focused project her division did for Thanksgiving.
Called Friendsgiving, the digital content targeted an audience that was “not necessarily aiming for the traditional Thanksgiving gathering,” but was interested in having a “communal collaboration” to mark the occasion. The content featured a gathering where everyone brings something, like a potluck, but “more elevated,” Neil said. “Packages around that did well for us and for advertisers.”
Digital content strategy expands
The pursuit of an effective digital content strategy continues apace at Scripps Networks Interactive (SNI). Earlier this month, the company announced the acquisition of online food publication Spoon University, started by millennials Mackenzie Barth and Sarah Adler. The company also expanded its 2015 deal with Snapchat’s Discover platform to include new food and home programming aimed at “millennials and centennials who may not yet be hooked on our premium offerings,” the company said.
In SNI’s May 23 earnings call, the Lifestyle Studios division was called out as the company’s “one-stop shop for all digital content, leading the way for digital and video integration” by Kenneth W. Lowe, SNI chairman, CEO and president.
“The Lifestyle Studios generated nearly 2.9 billion video views. That’s an increase of about 450% over the first quarter of 2016, really a remarkable achievement and just one example of our determination to expand our reach across all devices,” Lowe crowed.
Bonus tip on digital strategy: Read about how centennials will force companies to rethink online privacy.
Open communication channels are critical to organizations’ cyber-risk management strategies, according to Michael Siegel, principal research scientist at MIT Sloan School of Management. Yet board reporting by CISOs about the risk of cyberattacks is only now becoming a regular practice.
“The understanding of cyber risk and the reporting of cyber risk to the board was perhaps nonexistent, except at the top-tier financial companies,” said Siegel, also the associate director at MIT’s Interdisciplinary Consortium for Improving Critical Infrastructure Cybersecurity.
As data breaches and ransomware attacks have become regular items in news headlines, however, board demands for more cyber intel are increasing. “Now I’m hearing report quarterly, report monthly,” Siegel said. “I’m hearing the CISO reporting and working on risk assessment presentations to the board.”
Communicating about the threat of cyberattacks is complicated, Siegel said, because other risks organizations face — the potential of getting hit with lawsuits, say, or sustaining property damage after a natural disaster — are managed in the risk management office, with efforts typically led by a chief risk officer or the CFO.
Those executives and the CISO have different views of cyber-risk management, he said. Take cyber insurance, also known as cyber liability insurance coverage, which can help organizations offset the financial damage of a data breach. About a third of U.S. companies have policies now, according to a PwC report, but the market is growing and is projected to hit $7.5 billion by 2020.
It’s CFOs and CROs who are fueling that interest. CISOs — not so much.
“To the CISO — I’ll overstate this — but cyber insurance really doesn’t mean anything,” Siegel said. “It’s something the CFO does to manage the ultimate risk of the company. To the CISO, that my systems work and that I’m not attacked and that we don’t have downtime — the operational aspect of keeping things running — is the major significance.”
The CISO then is perhaps in a better position to understand what the risk of, say, introducing new technologies in the organization is, he said — highlighting the importance of clear communication between the IT security chief and the CFO in guarding against cyberattacks.
“They have to understand how to speak to each other and make the two things work.”
MIT’s Michael Siegel discusses more about cyber-risk management — including the “inverse ROI” of not investing in cybersecurity — in this SearchCIO interview.
CAMBRIDGE, Mass. — Universal basic income, a monthly stipend given out by a government to help cover its citizens’ basic needs, is getting a lot of attention as advances in automation exceed what even the experts predicted and median wages continue to stagnate. But two researchers at the recent MIT Sloan CIO Symposium said the need for universal basic income (UBI) is premature.
Andrew McAfee and Erik Brynjolfsson, authors of the forthcoming Machine, Platform, Crowd: Harnessing Our Digital Future, said UBI is not a viable solution for the sort of job loss happening in our current economy. “The data are incredibly clear: Month-by-month, as long as we’re not in a recession, we need more hours of work to make the economy go,” said McAfee, principal research scientist and co-director at the MIT Initiative on the Digital Economy.
Indeed, despite the fact that machine learning systems are advancing at a faster pace than expected, machines still can’t do everything, according to the researchers. And when McAfee and Brynjolfsson researched UBI for their previous book, experts explained to them that “the core problems were not so much something that you could write a check for. It was the fact that people wanted to be engaged in their community, wanted to work,” said Brynjolfsson, director of the MIT Initiative on the Digital Economy and an economist.
Regardless, UBI is not an all-or-nothing solution for people suffering from wage stagnation, Brynjolfsson said. Adjustments to public policies on minimum retirement age, family leave, health care and disability support could raise the standard of living while keeping people engaged. And, he continued, these kinds of policy decisions to benefit the citizenry are not unprecedented in our history. The introduction of public education, antitrust policy, changes in the income tax code and Social Security are examples of policies made for the common good, Brynjolfsson said.
“All of which were incredibly controversial at the time, and yet we got past it,” McAfee said.
“And the net effect was having a dynamic capitalistic vibrant economy with lots of competition, lots of innovation but also living standards that were raised for a broad set of people,” Brynjolfsson said.