TotalCIO


February 21, 2017  2:22 PM

IBM’s Rossi on AI ethics: ‘Start small and widen the scope’

Linda Tucci

Anyone who has been following the rapid development of artificial intelligence systems by the military-industrial-academic complex would probably agree that coming up with a code of AI ethics is a prudent move by us humans. But who gets to decide what constitutes ethical AI, and how do those tenets actually get embedded in AI systems?

“I prefer to start small and widen the scope,” said Francesca Rossi, a professor at the University of Padova in Italy and research scientist at the IBM Thomas J. Watson Research Center.

Rossi was among the technology leaders who met recently in Asilomar, Calif., to hammer out guidelines for the development of safe AI systems that are of general benefit to humankind. The work actually began two years prior when scientists and thinkers from other disciplines gathered in Puerto Rico to take on the topic of AI ethics, she explained in a phone call after the Asilomar event.

“After two years we thought we were ready to spell out a little bit more what it means to develop AI in the most beneficial way,” Rossi said.

The upshot was the Asilomar AI Principles, 23 tenets in all, focused on three areas: AI research issues, AI ethics and values, and long-term issues pertaining to the impact of advanced AI systems on the history of life on Earth.

AI ethics, one scenario at a time

Of particular interest to Rossi is Principle 10: AI systems should be designed and operated so that their goals and behaviors can be assured to align with human values throughout their operation.

Living up to that laudable principle is not without its own ethical challenges, she conceded.

“There is no one set of ethical ideas for everybody, of course. Behaving according to some ethical principles may mean many things depending on the culture, professional codes, social norms,” Rossi said.

The way forward for AI ethics, she believes, is grounded in specific scenarios, where the human values that we want to see embedded in machines are well understood.

An example, she said, is the work IBM and other companies, together with domain experts, are doing on decision-making support systems in healthcare. The approach is to go from medical specialty to medical specialty, scenario by scenario. “We understand what we expect from that doctor, and we expect the AI system to behave at least as ethically as a doctor, if not better,” Rossi said.

“Once you understand what values you have to put in, in those specific scenarios, the second thing then is you have to figure out how to build the AI systems so they support value alignment. And then you have to make sure that that system behaves according to those values,” she said. “There is a lot of research and work that has been and is being done by scientists who understand how to do that,” including work by Rossi.


‘Real human moral judgment uses the whole brain’

In “Embedding Ethical Principles in Collective Decision Support Systems,” a paper published last year in the proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Rossi and colleagues make it clear that building safety constraints, ethical principles and moral values into AI systems is far from settled science.

Rossi et al. lay out three approaches — each corresponding to one of three major schools of Western moral thought — that have served as “the starting point for nearly all discussions of machine ethics”:

  • Deontological (associated with Immanuel Kant). This approach regards morality as a system of rights and duties; it focuses on categories of actions — deeming them “permissible, impermissible or obligatory based on a set of explicit rules.”
  • Consequentialist (associated with Jeremy Bentham and John Stuart Mill). This approach aims to “produce the best aggregate consequences (minimizing costs and maximizing benefits) according to a pre-specified value function.”
  • Virtue- or character-based (associated with Aristotle). This approach “regards ethical behavior as the product of an acquired set of behavioral dispositions.” These dispositions, the authors state, cannot be adequately summarized as either “a set of deontological rules” or as a “commitment to maximizing good consequences.”

Indeed, each of the approaches has well-known limitations, the authors note. Deontological principles are easy to implement “but may be rigid.” Consequentialist principles call for complex calculations “that may be faulty.” Virtue is opaque and entails extensive training “with an unknown teaching criterion.”

But there’s a bigger problem with all three approaches, namely: “… implementing them may depend on solving daunting, general computation problems that have not been solved and may not be solved for some time.”

The authors illustrate just how daunting the task is by analyzing what it would take to program a machine to understand the moral precepts of don’t lie and don’t kill. Inculcating moral judgment in a machine, they argue, will likely require a hierarchical decision system implemented within an agent or perhaps across agents that incorporates all three approaches. “Real human moral judgment uses the whole brain,” the authors state.
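The paper argues for that kind of layered system rather than prescribing code, but a toy sketch can make the layering concrete. The Python below is purely illustrative: every action, category and number in it is a hypothetical stand-in, not anything from Rossi et al. A deontological filter is applied first, and a consequentialist value function plus a virtue-style learned bonus rank whatever survives.

# Toy sketch, not from the paper: a hierarchical ethical decision procedure.
# Every category, score and action below is a hypothetical illustration.

FORBIDDEN_CATEGORIES = {"deception", "harm"}  # deontological layer: hard rules on action categories

def expected_utility(action, outcomes):
    """Consequentialist layer: score an action with a pre-specified value function."""
    return sum(prob * value for prob, value in outcomes[action])

def disposition_bonus(action, learned_dispositions):
    """Virtue-style layer: a preference acquired from past feedback."""
    return learned_dispositions.get(action, 0.0)

def choose_action(candidates, categories, outcomes, learned_dispositions):
    # 1. Deontological filter: drop impermissible actions outright.
    permissible = [a for a in candidates if not (categories[a] & FORBIDDEN_CATEGORIES)]
    if not permissible:
        return None  # no ethically permissible option; defer to a human
    # 2 and 3. Rank the rest by expected utility plus the acquired-disposition term.
    return max(permissible,
               key=lambda a: expected_utility(a, outcomes) + disposition_bonus(a, learned_dispositions))

# Example: a clinical decision-support system choosing how to answer a patient.
actions = ["full_disclosure", "soften_bad_news", "withhold_diagnosis"]
categories = {"full_disclosure": set(), "soften_bad_news": set(), "withhold_diagnosis": {"deception"}}
outcomes = {"full_disclosure": [(1.0, 0.6)], "soften_bad_news": [(1.0, 0.7)], "withhold_diagnosis": [(1.0, 0.9)]}
learned = {"full_disclosure": 0.2}  # an acquired bias toward candor

print(choose_action(actions, categories, outcomes, learned))  # -> full_disclosure

Even this toy version shows why the authors call the task daunting: the rules, the value function and the learned dispositions all have to come from somewhere, and each is contested.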

My opinion? Without knowing what the whole brain actually does — and we don’t —  it will be very difficult to embed AI ethics in a machine, even if we know what those ethics should be.

The paper, by Rossi, Joshua Greene of Harvard University, John Tasioulas of King’s College London, Kristen Brent Venable of Tulane University and Brian Williams of MIT, can be found here.

Email Linda Tucci, senior executive editor, or find her on Twitter @ltucci.

February 20, 2017  4:05 PM

Smart city platform: A work in progress

Nicole Laskowski

One of the head scratchers city CIOs are trying to work through is what the smart city platform should look like. To flesh out their requirements for an analytical data platform, 120 members of the public and private sector, including representatives from 18 cities, five federal agencies and two countries, recently gathered in Kansas City to identify the kinds of data typically collected to assess city performance, the gaps that exist in the data sets and how the data should be measured.

“Right now, if you go to any smart city conference, you’re going to find as many definitions of a smart city as there are attendees,” said Bob Bennett, chief innovation officer for Kansas City. “What we’re trying to do is put some structure to it.”

The event was part of the Global City Teams Challenge, developed by the National Institute of Standards and Technology (NIST) to facilitate collaboration between cities and to develop a smart cities blueprint of best practices and technology standards. NIST is co-hosting a series of national workshops that focus on different facets of a smart city; the event in Kansas City, one of the most connected cities in the United States, is the second of these.

The goal of the workshop was to pin down a common national data language for use by city governments so that the data collected by one city — be it on water, electricity, traffic and so on — could be used by others. For example, the small group discussion Bennett participated in focused on public health. “We came to the conclusion that the Centers for Disease Control [and Prevention] does a good job of compiling data at the county level,” he said. “We didn’t need to reinvent the wheel.”

Bennett said the information gathered at the workshop will be turned into a white paper highlighting the 80 to 90 data sets cities rely on most and how they’ve collectively agreed the data should be measured. The paper is due to NIST in the spring and will be shared with the vendor community.
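To picture what a common data language buys, consider a minimal sketch. The record format below is purely hypothetical; NIST and the workshop have not published these field names, units or values. The point is that when every city emits the same fields and units, one city's numbers can be compared directly with another's.

# Hypothetical illustration of a shared city-metric format. Field names,
# units and values are assumptions, not anything NIST has standardized.
from dataclasses import dataclass
from datetime import date

@dataclass
class CityMetric:
    city: str           # e.g. "Kansas City, MO"
    domain: str         # "water", "electricity", "traffic", "public_health", ...
    metric: str         # agreed-on metric name
    unit: str           # agreed-on unit of measure
    period_start: date
    period_end: date
    value: float

kc = CityMetric("Kansas City, MO", "water", "residential_water_use",
                "gallons_per_capita_per_day", date(2017, 1, 1), date(2017, 1, 31), 81.4)
boston = CityMetric("Boston, MA", "water", "residential_water_use",
                    "gallons_per_capita_per_day", date(2017, 1, 1), date(2017, 1, 31), 63.9)

# Same metric, same unit, same period: the comparison is meaningful.
print(kc.value - boston.value)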

How soon NIST-sponsored events, like the one in Kansas City, will result in standards is anyone’s guess. Indeed, some cities, like Boston, have warned sister smart cities against rushing to anoint a “smart city platform” before all the technical standards — not just those for data collection — are worked out.

“It’s too early for platforms,” planners stated in the Boston Smart City Playbook, an open letter to the smart city community.

“We don’t know what kinds of sensors we’ll use in Boston over the next 10 years, who will build them, what technical standards they’ll adhere to, and where they’ll go. … As a city, we’re not in the business of making bets on what technology standards will prevail. (It’s why we’re working with the National Institute for Standards and Technology as part of our work.) Until those standards are clear and until we have a better idea of the technical landscape, we don’t need or want a ‘smart city platform.’”


February 17, 2017  11:44 AM

New security threat: Custom applications in the cloud

Jason Sparapani

The average organization today develops 464 custom applications — some running functions critical to the business — and puts a lot of the software in the public cloud. IT security is in the dark about nearly two-thirds of these applications, according to new research released this week from nonprofit Cloud Security Alliance (CSA) and security software vendor Skyhigh.

It’s yet another worry for CIOs, who have been trying to work more closely with their business colleagues to stop the spread of shadow IT. Custom applications developed outside the purview of IT and run in the cloud are of particular concern. CSA chief executive Jim Reavis said such software is especially vulnerable to cyberattacks because it is often built by people who “lack the tools and expertise to protect applications they develop and deploy in the public cloud.”

And if there is a breach? CIOs could get the ax. The report also found that 29% of CIOs would get fired if a cyberattack involved core custom applications in the cloud — even if no data is permanently lost.

CSA and Skyhigh surveyed 314 IT professionals from major industries worldwide about custom applications — software their organizations build themselves. The pair released the report at the RSA Conference, a cybersecurity convention held this week in San Francisco.

Mission: Critical

According to the report, IT security personnel are aware of just 38% of custom applications, which more organizations are relying on for “business-critical” operations — that is, they could bring business to a halt if they’re not functioning. Today, 73% of organizations run such essential operations in custom applications.

Increasingly, these custom applications are running on infrastructure from cloud providers such as Amazon Web Services, Microsoft Azure and Google Cloud Platform. The report found that 46% of business-critical applications are running in the public cloud or in the combination of public and internally run private cloud known as hybrid.

Cloud computing, the report said, makes it easier for departments to build and deploy custom applications without IT security’s involvement.

The survey found that while 63% of organizations see public cloud providers as offering better security than their own data centers, concern persists about the security of custom applications in the cloud — with 32% “moderately concerned” and another 32% “very concerned.”

Insecurity complex

Data in the applications is exposed to threats “independent of the platform,” the report said. So accounts could be hacked through phishing, for example, or sensitive data could be uploaded to the cloud or downloaded to a device owned by an employee or other person.

The shift toward cloud infrastructure will continue, the report said. While 61% of application workloads are in data centers today, fewer than half, or 46%, will stay there in the next year.

“This rapid shift is partially due to new applications that are deployed in the public cloud, and because enterprises expect to migrate 20.7% of their existing applications running in data centers to the public cloud during this time,” the report said.

Who’s to blame?

If a cybersecurity breach brings down custom applications in the cloud, the CIO is not first in the firing line. Half of survey respondents indicated IT security personnel would get fired, 32% said operations folks responsible for monitoring the cloud infrastructure would get canned and 22% pointed to the developers who built the applications.

But the research also found that developers and IT security professionals would most likely take responsibility for a breach.

“Perhaps this indicates that at some level, developers feel responsible for the security of custom applications and could therefore be motivated to work more closely with IT security colleagues to secure these applications,” the report said.

CIOs should do all they can to nurture this sense of responsibility by making sure developers and IT security colleagues are working hand in glove.


February 10, 2017  1:02 PM

Accenture: AI is the new UI

Nicole Laskowski

Artificial intelligence will be the new user interface for internal and customer-facing applications — not eventually, but soon, according to Accenture. In its flagship technology report Technology Vision 2017, “AI is the new UI” is one of five trends Accenture predicts will disrupt businesses in the next three to five years.

The overall theme of the report recommends a shift in software design philosophy from programming for the masses to programming for the individual. “The standard way people built applications 20 years ago was you had one interface to serve everybody,” said Michael Biltz, managing director of the Accenture Technology Vision at the Accenture Technology Labs in San Jose. “But as technology becomes more and more sophisticated with artificial intelligence, personalization, different interfaces, data from everywhere … you have the ability to design technology that’s going to specifically adapt to people.”

To get started on the “AI is the new UI” trend, Biltz said CIOs should focus on applications that need to change the fastest or, put another way, that are closely aligned with business goals. That will likely mean customer-facing applications will be touched first, but Biltz believes this will shift quickly to internal apps, where artificial intelligence can make employees more effective and more efficient.

Although AI remains a catch-all term today, Biltz encourages CIOs to consider technologies that make “an interaction or an interface more human or natural, so think voice recognition and natural language processing.”

Accenture’s tech roadmap

Accenture’s 2017 report is based on a survey conducted with 5,000 business executives as well as interviews with internal experts, startups and corporate partners.

In addition to “AI is the new UI,” Accenture’s 2017 trends are as follows:

  • Design for humans: Technology should adapt to humans, not the other way around.
  • Ecosystems as macrocosms: Organizations are connecting with third-party companies to offer new services and attract new customers. But choose wisely: Partnerships forged today will affect what organizations will look like — and how customers will see them — in 10 years.
  • Workforce marketplace: Use platforms to find the necessary skills for a project’s success on demand; teams will be assembled and disassembled as needed.
  • The uncharted: What digital business will look like in three to five years is still unknown; early adopters will have opportunities to define it.

The trends are meant to be a realistic roadmap of what companies could — and, Accenture believes, should — accomplish in the next three to five years, according to Biltz.

“There’s a reason why something like quantum computing is not on the list. It’s not because we don’t think it’s important, and it’s not because we don’t think it’s eventually going to be big,” he said. “Rather, for most companies, they’re going to look at that technology and say, ‘Yeah, there’s not a lot for me to do right now.'”

Taken together, the five trends underscore an important directive for CIOs: Employee, customer and corporate goals will become increasingly intertwined going forward. Conversations will veer away from technology functionality toward practical use.

“It’s going to become more a question of who will be using the technology, what are their goals and how do we create technology that’s going to be able to help them reach their goals,” Biltz said.

And the iterative style of work that’s infiltrating IT departments now will become the norm. “How we act and interact with each other is continuously changing all of the time, and if you’re not iterative, no matter how smart you are or how good your initial design is, it’s going to become obsolete as the world changes,” he said.


February 6, 2017  3:31 PM

Reporting lines for the CDO role: Context counts

Nicole Laskowski
CDO, Chief Data Officer

The chief data officer (CDO) role may be gaining influence, but a lingering question that both public and private sector organizations have yet to answer is: Who should CDOs report to? According to experts, it really depends.

Government CDOs who have few or no staff may benefit from the collegiality of an IT department and a CIO supervisor, according to Jane Wiseman, Innovations in American Government Fellow at the Ash Center for Democratic Governance and Innovation, which is part of the John F. Kennedy School of Government at Harvard University. It also doesn’t hurt that the CIO’s budget is often bigger than a standalone CDO budget, which potentially means extra resources for data projects.

“If you’re part of a CIO shop, you work with people who are more niche technical people,” said Wiseman, who recently published Leading CDOs: A Framework for Better Civic Analytics. “And you’re part of a bigger team that might have expertise [necessary] to help problem solve.”

When a CDO reports to the head honcho — be it a mayor or a CEO — that sends a clear message that “the whole business is run on analytics,” Wiseman said, which has obvious plusses for the person in the CDO role. But that can also mean that the CDO works at the whim of the CEO’s every request.

“Frankly, when a CEO in government … wants something, they typically don’t want it next week, they want it tomorrow or yesterday,” she said. “And some of these analytics things take a while to get an answer.”

CDOs who don’t answer directly to the organization’s top boss typically have more space “to take on projects requiring more time to demonstrate results, including more methodologically complex analytics projects,” according to Wiseman’s report.

Research consultancy Gartner, on the other hand, advocates that the CDO role or those who head up data and analytics should live on the business side of the house, and the numbers it gathered suggest businesses and governments are doing just that. In Survey Analysis: Second Gartner CDO Survey – The State of the Office of the CDO, only 16% of the 180 respondents, a mix of CDOs, chief analytics officers and high-level data and analytics leaders, said they report to the CIO. The number was almost double — 30% — for those data roles reporting to CEOs. Four percent of those surveyed came from the public sector.

Jamie Popkin, lead author of the report, said he expects to see the numbers continue to tilt in favor of a CDO-CEO reporting structure. “The point we’re making is that this is a new business function equal to IT and HR, finance, supply chain, any of the operations departments,” he said to SearchCIO in November.

Still, both Popkin and Harvard’s Wiseman agreed there are no hard and fast rules here. Popkin said that, for example, at a mining company he worked with, it made sense for the CDO role to be rooted in the IT organization because the business had no interest in being a data steward.

Wiseman also said the CDO reporting line ultimately depends on the company situation. “I don’t think there’s any one right way to do it. It’s just that there are tradeoffs,” she said.


January 31, 2017  8:36 PM

City CIOs gather to swap smart city experiences

Nicole Laskowski

A select group of city CIOs and other technology policy makers gathered recently at Harvard University to participate in the 2017 Smart Cities Innovation Accelerator. The order of the day? To report on their smart city initiatives, or the integration of technology and sensors into city infrastructure. The group was encouraged to have frank discussions about what’s working — and what isn’t.

“The goal is that they leave with deeper strategic insight for their cities,” said David Ricketts, Innovation Fellow at the Technology and Entrepreneurship Center at Harvard University. “That’s really what we’re trying to accelerate.”

Ricketts said the event was designed to give attendees the space and time needed for active participation and interaction. One of the issues Ricketts put up for discussion was sensors on streetlights. Smart cities are often installing smart streetlights, which replace old-school bulbs with energy-efficient LED counterparts and add several sensors. The combination of LED lights and sensors on streetlights throughout a city adds up to more than the sum of its parts. As Ricketts put it: “It’s a mini-cellular network.”

Sensors can include Wi-Fi, but they can also include things like movement detection and weather detection. The question for attendees was, do cities really need to be measuring all of it? Or are they falling prey to vendor pitches? One city revealed it decided to leave off the Wi-Fi sensor because it “already had local service providers providing the dense population with Wi-Fi, so there was no reason,” Ricketts said. The response was an “ah-ha” moment for the attendees who didn’t have Wi-Fi sensors but assumed they needed them.
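One way to frame that choice (a purely hypothetical sketch, with made-up sensor options and per-node costs rather than any vendor's actual catalog) is as a per-node configuration in which every sensor is optional:

# Hypothetical streetlight-node configuration; the sensor names and cost
# figures are illustrative assumptions, not vendor specifications.

AVAILABLE_SENSORS = {
    "wifi_access_point": 300,   # assumed incremental cost per node, USD
    "motion_detection": 40,
    "weather_station": 120,
    "air_quality": 150,
}

def configure_node(needed_sensors):
    """Return the sensor set and incremental cost for one LED streetlight node."""
    unknown = set(needed_sensors) - AVAILABLE_SENSORS.keys()
    if unknown:
        raise ValueError(f"unsupported sensors: {unknown}")
    return {s: AVAILABLE_SENSORS[s] for s in needed_sensors}

# A city whose residents already get Wi-Fi from local providers can leave
# that radio out and pay only for what it will actually use.
config = configure_node(["motion_detection", "weather_station"])
print(config, "total per node:", sum(config.values()))

Framed this way, the Wi-Fi radio becomes an explicit line item a city can decline rather than a default it inherits from the vendor's pitch.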

“That’s why we’re focused on learning and on the bigger strategy picture as opposed to just talking about streetlights,” Ricketts said.

Necessary and rare

Opportunities for frank discussion and the exchange of ideas are rare and important, according to participants. “Frequently, smart city events are hosted by vendors, and the commercial interest drives a good deal of the content,” said Bob Bennett, chief innovation officer for Kansas City. “Harvard came at this from an academic and civic perspective, and the cities drove the content.”

One of the critical components for city CIOs and others is to figure out best practices, according to Bennett. “I can’t do the same type of public health data assessment that Louisville is doing,” he said by way of example, but he can work with his colleagues in Louisville to understand the benefits they gained from their work and bring those potential opportunities to his own city.

As David Graham, deputy COO for San Diego, jokingly put it: “plagiarizing equals iteration.”

“What’s happening in Boston that came from an idea out of San Francisco that’s being implemented in Kansas City that has applications for San Diego — that is how we get to the right solutions for smart cities,” Graham said. “If we’re not working with other cities, then we’re working in a vacuum.”


January 31, 2017  4:30 PM

Net neutrality law: Much ado about nothing?

Jason Sparapani
Broadband, Internet service providers, Net Neutrality

Friday I wrote about Ajit Pai, President Donald Trump’s pick for Federal Communications Commission chairman, who is expected to dismantle some of the internet regulations put into place in 2015.

This has caused real fear among proponents of net neutrality — the idea that internet service providers (ISPs) such as AT&T and Comcast can’t slow down or speed up internet traffic at their whim.

For example, 86% of 411 IT professionals surveyed on peer network Spiceworks worried that ISPs would reduce internet speed for certain types of content should the rules go away. And 84% were afraid providers might choose to block access to certain sites.

But how crucial is the net neutrality law to a free, open and competitive internet?

It’s not, said Johna Till Johnson, CEO and founder of Nemertes Research. Net neutrality, she said, is based on the faulty assumption that ISPs are “bad guys” who, if not forced, will stifle small startups.

“The logic behind passing [the 2015 rules] was to prevent carriers from doing something they hadn’t ever done before — only out of fear that because carriers could possibly do it and they’re big, bad entities and they might do it,” Johnson said.

Who’s afraid of ISPs?

Hiawatha Bray, technology writer for The Boston Globe, made a similar point. He spoke to Radio Boston host Meghna Chakrabarti on Boston radio station WBUR on Thursday.

“We’ve had this internet argument going on now for over a decade, and I keep waiting for this nightmare in which giant internet companies are seeking to suppress smaller startups and it just hasn’t been happening.”

That is, until AT&T started letting people who use its mobile service view DirecTV, its subsidiary, for free — a clear case of a provider showing favor to its own content. That’s unfair, Bray said, but “this is about the first time that anything like that has happened.”

What new FCC head Pai has made clear he doesn’t like is the reclassification of internet providers as public telecommunications services under Title II of the Communications Act, the law put in place in the 1930s to watch over telephone companies.

The FCC doesn’t regulate what ISPs can charge consumers; mainly it uses the law to police practices it sees as unfair or harmful to consumers — for example, it can target business alliances in which companies pay providers to prioritize traffic. And it has said what AT&T is doing with DirecTV is in violation of net neutrality laws.

A lighter touch on net neutrality law?

But Bray said Title II gives the FCC more authority than it needs to oversee an industry that “has thrived on a very light regulatory regime” before the 2015 rules.

“I think there should be a much lighter regulatory regime in which you would have Congress pass a law that would forbid certain specific unfair practices.”

And Johnson said net neutrality ignores the influence that companies like Google have on internet business.

“Because, hey, Google Play doesn’t carry your app, you’re toast,” she said. “And by the way nobody’s requiring Google Play to carry your app. They just do or don’t and nobody’s looking into that.”

Future tense

Finally, locking in “old-fashioned regulations” doesn’t take into account emerging technologies, Bray said.

Right now, for example, a company called Starry Inc. is testing a service in Boston that would offer high-speed internet — 200 to 300 megabits a second — at around the same price as today’s broadband.

If it works, Bray said, the need for net neutrality law would be moot — because more than just a few companies would control the pipes that pump the internet to users.

“You’d have real competition and the need to regulate the internet as a Title II kind of telephone-type monopoly completely vanishes,” he said. “I just don’t think it’s necessary now, and it’s certainly not going to be necessary when the newer technologies become available.”


January 31, 2017  3:34 PM

AI challenges and why CIOs can’t afford to fall behind

Sue Troy

Automation technologies – including physical- and software-based robots, artificial intelligence, machine learning, and other types of “learning” tech – are expected to change the future of work in a massive way in the coming decades. The McKinsey Global Institute recently projected that over half of “worker activity,” or job tasks, has the potential to be automated as early as 2035. This tech will increase global productivity growth by 0.8% to 1.4% annually, the consultancy said.

For businesses, this represents tremendous savings as the cost of producing goods and delivering services goes down. And it’s not too soon for companies to start taking advantage of these technologies. Indeed, those that haven’t yet taken steps may be left behind – permanently, said Josh Sutton, global head of data and artificial intelligence at consultancy Publicis.Sapient.

“I do have [a] concern around whether [AI laggards] will be here a decade from now or not,” said Sutton in a phone conversation about artificial intelligence and IT last week.

He points to two reasons: Consultancies like Publicis are seeing a major shift in business success stories toward companies adopting a services-based approach rather than a product-based approach. In addition, the rate of change in industry is increasing and the cost of vendor lock-in is decreasing. “If you take those two things in parallel, what you’re seeing isn’t just a technology wave. You’re seeing a fundamental shift in how businesses and enterprises want to engage from a commercial point of view, and [if you’re] behind that shift, [it’s] going to be … very difficult to catch up,” he said.

Companies that wait for AI technology to develop into what Sutton calls “canned solutions” run the risk of forever playing catch-up – or failing completely — because the rate of change is so fast.

AI challenges: Education plan

He advises CIOs to map out a strategy for AI technologies. But they face a number of potential obstacles, he warned.

The first big challenge is for CIOs to educate themselves on the technology and discern between vendor hype and actual product capabilities. “There’s a tremendous amount of noise in the industry and there’s a very wide gap between what many product companies position themselves as being able to do versus the reality of what they can actually deliver upon,” he said.

Organizational expectations can also complicate a CIO’s efforts to set an AI technology strategy. Business leaders have a range of attitudes about AI, Sutton said, from those who believe it’s the answer to all of a business’ problems to those who are dead-set against investing in such an emerging technology.

“In only extremely few cases does the business leadership of a company have a well-grounded understanding of what’s possible vs. what’s not,” he said. IT leaders need to work to set appropriate expectations.

AI challenges: Single vendor vs. best-of-breed choice

Once CIOs and business leaders have a clear sense of what is truly possible with AI technology, they will face the question of whether to adopt a single vendor’s platform or to take a best-of-breed approach. Sutton advocates the latter.

“I believe the industry is too immature to commit to any single player. And I don’t think any single player provides you the full suite of capabilities you’d want across an enterprise yet,” he said.

Indeed, the decision will be a strategic bet for CIOs. Should they align themselves with a particular vendor in the hopes that its AI technology develops the capabilities that the IT organization needs? Or, should they put together a best-of-breed architecture, which is more complex than the single-vendor strategy but offers more flexibility?

“I have a strong bias that the services-based best-of-breed approach is a less risky road for companies to go down,” Sutton said. “But that’s not to say that some companies won’t make the right bet on the company that does get to that true enterprise-level platform first and standardize around them and have a nice clean, simple architecture as an advantage coming from that. But it’s a big bet to make.”

AI challenges: Data governance, culture shock

The final big challenge that Sutton sees relates to data strategy, because many AI use cases are dependent on data, whether from internal or external sources. Businesses need to have solid data governance policies around their internal data, and understanding how to augment that data with data from external sources is also very important.

“Even your data scientists and data teams that you historically have had on staff might not be particularly equipped to deal with the challenges that need to be addressed,” Sutton said.

That’s not to suggest that there’s a big shortage of people who are prepared to work with AI technology. Sutton said that he thinks there are a lot of technologists who are very interested in working with AI, and the tools are easy to work with. The skills required for AI development can be transferred from skills developed around other technologies, he said.

Sutton said he’s more worried about the cultural changes that businesses will face as AI technology is implemented. AI will increase human productivity, which will inevitably lead to staff reductions. Some of those reductions will be accomplished by attrition, but some of them will be more visible, much like those created by outsourcing in the past.

“Outsourcing was viewed with a very, very negative perception because it was taking jobs away and it created a backlash against the organizations and people within organizations driving that,” Sutton said. “I believe that there’s a risk that that might happen with relation to artificial intelligence solutions as well.”


January 30, 2017  3:29 PM

Waking up to benefits of copy data management software

Jason Sparapani
Cloud-Based Services, Copy data, Disaster Recovery, Virtualization

Have you ever spaced out during a meeting only to come to after hearing something arresting — a proposal or possibility that might really affect how you do your job? Rosetta Stone’s Mark Moseley has — and it led to his purchase of copy data management software.

Moseley is vice president of IT at the language-learning company, which overhauled its business from shipping compact discs that teach Italian, French, Chinese and other tongues to offering courses online, which people can access on their PCs or smartphones.

Rosetta Stone’s growing number of online offerings resulted in an explosion in the amount of data that has to be stored. In fact, Moseley said, he was running out of floor space in his data center for servers that could hold all the data.

A big piece of the problem was a profusion of copy data — that’s the duplicate copies of data that backup software and other applications churn out and then stash on physical servers.

Copy that

Meanwhile, Rosetta Stone salespeople were pitching language-learning software to a technology company called Actifio. Its trade? Making software to deal with overflowing copy data. But copy data management software wasn’t the main reason Moseley agreed to a meeting with the Actifio folks.

“It was a favor really to our sales team,” Moseley said. “I didn’t care. I was mostly zoned out of the meeting.”

Then an Actifio sales rep mentioned being able to very quickly create a clone of Oracle Database — a separate copy of the database management system software and the business data — in just a few minutes.

“I’m like, ‘What?'” Moseley said. Rosetta Stone’s enterprise resource planning system runs on Oracle. “The cloning process for us would be days.”

Moseley was rapt, asking questions and imagining being able to swiftly clone his test and development environments, too. That would help his developer team move lithely from one project to the next.

For example, Moseley said, if there’s a problem in an application’s production environment — the one users use — developers could quickly spin up a cloned environment and fix the problem there.

Another scenario: “You’re in the middle of developing something. You hit an anomaly; you blow up your environment. We’ve got a deadline. I can’t wait two or three days; I need a new one now.”
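What that looks like in practice depends on the product, and Actifio's actual interface isn't shown here. As a purely hypothetical sketch of the pattern Moseley describes (clone, fix, discard), imagine a thin wrapper around whatever copy data management API is in place:

# Hypothetical sketch of the clone-fix-discard pattern Moseley describes.
# "CopyDataClient" and its methods are illustrative stand-ins, not
# Actifio's actual API.
import time

class CopyDataClient:
    def clone(self, source_db, target_env):
        """Present a virtual copy of source_db to target_env (minutes, not days)."""
        print(f"cloning {source_db} into {target_env} ...")
        return f"{target_env}/{source_db}-clone-{int(time.time())}"

    def destroy(self, clone_id):
        print(f"discarding {clone_id}")

def reproduce_and_fix(client, source_db, bug_id):
    clone_id = client.clone(source_db, target_env="dev-hotfix")
    try:
        # Developers debug against the clone while production stays untouched.
        print(f"debugging {bug_id} against {clone_id}")
    finally:
        client.destroy(clone_id)   # clones are cheap, so throw them away when done

reproduce_and_fix(CopyDataClient(), "oracle-erp-prod", bug_id="ERP-1042")

The point of the pattern is that the clone is virtual and nearly free, so developers stop treating test environments as scarce.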

Recipe for disaster recovery

Then Moseley saw other benefits the copy data management software could have for his IT operations: Actifio’s cloud-based application lets him virtualize his data, hardware platforms and storage and backup applications. That allows him to store less data.

Actifio also lets Moseley set up disaster recovery sites in the cloud — a huge boon.

“I don’t know how often most companies have disasters, but we don’t have them very often, and we don’t have the money to spend on having a completely new disaster recovery site set up,” Moseley said.

“However, with this platform I can very cheaply and easily have a disaster recovery site set up in the cloud ready to go. It’s costing me almost nothing — until I have to use it.”

That gives Moseley a sense of security that he can then present to Rosetta Stone’s customers, who share their contact information with the company.

That’s now. Back during the sales pitch — after being shaken from indifference — Moseley started piecing together “all of the different things that I can do with this platform and I’m hooked. And really it all started with just talking a little bit about Oracle.”

Mark Moseley talks about Rosetta Stone’s digital transformation strategy in this SearchCIO interview.


January 24, 2017  2:51 PM

The good and the bad of a multicloud strategy

Jason Sparapani
Cloud Computing, Multi-cloud

Are two clouds better than one? For many organizations moving part or all of their IT operations into the cloud, they are. Three and four are even better.

Many organizations today follow a multicloud strategy — putting some workloads in Amazon Web Services, others in Microsoft Azure and yet others in IBM Bluemix or Google Cloud Platform. That way, they’re not locked in to one provider, said Donna Scott, an analyst at market research house Gartner.

“So they have options when they are deploying,” Scott said in a recent webinar on cloud computing for CIOs. “Or if they become unhappy with one provider or one provider decides to raise their rates by 50%, they have options — competitive options — to go after.”

Other companies use a multicloud strategy for more than just mitigating risk, sister site SearchCloudComputing reported late last year. They might put different applications in different providers’ clouds because of lower costs or better performance. That’s a shift that has happened as cloud has become more popular.

“Multicloud today is much less about resilience and hedging bets and more about matching workloads to services,” said Melanie Posey, analyst at market researcher IDC, in the article.
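One way to make "matching workloads to services" concrete is a simple scoring pass per workload. The sketch below is hypothetical; the providers' attribute scores and the workload weights are invented for illustration and are not benchmark data or anything from Gartner or IDC.

# Hypothetical workload-to-cloud matching; the attribute scores and weights
# below are invented for illustration, not benchmark data.

PROVIDERS = {
    "aws":   {"cost": 0.6, "analytics": 0.9, "compliance": 0.7},
    "azure": {"cost": 0.7, "analytics": 0.7, "compliance": 0.9},
    "gcp":   {"cost": 0.8, "analytics": 0.8, "compliance": 0.6},
}

def rank_providers(workload_weights):
    """Return providers ordered best-first for one workload's priorities."""
    def score(attrs):
        return sum(workload_weights.get(k, 0.0) * v for k, v in attrs.items())
    return sorted(PROVIDERS, key=lambda p: score(PROVIDERS[p]), reverse=True)

# A data-warehouse workload that prizes analytics; keep the runner-up as a
# documented fallback so a price hike by the leader isn't a crisis.
ranking = rank_providers({"analytics": 0.6, "cost": 0.3, "compliance": 0.1})
primary, fallback = ranking[0], ranking[1]
print(primary, fallback)

Keeping the runner-up documented is the lock-in hedge Scott describes: if the leading provider raises prices or misses service levels, the fallback is already known.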

But there are pros and cons to a multicloud strategy. The big pro is choice, Scott said. With applications hosted by different cloud providers, a CIO’s eggs are in different baskets, “just in case your partnership with a vendor goes awry, they don’t meet service levels, they raise their prices.”

There are a variety of negatives to consider, too, Scott said. Organizations need to learn how each cloud provider they work with operates. The cloud computing industry still largely lacks standards, so providers have different capabilities and APIs.

And each may require different cloud product managers — a new role for many organizations — to determine which applications to keep on premises, which to move to the cloud and which of the provider’s services to use.

So if a company has applications split among two cloud providers, Scott said, “you might find that one product manager can manage both; you might find you have to have more skills and expertise.”

To learn more about cloud product managers and other key cloud roles, read this SearchCIO tip.

