Credential stuffing doesn’t often make the news, but it’s a $10 billion a year problem, according to Shuman Ghosemajumder, CTO at Shape Security in Mountain View, Calif. The term describes the practice by cybercriminals of taking usernames and passwords they’ve collected from one breach and using them to gain access to other accounts.
Breaches like these create “a sort of ecological disaster for the internet,” Ghosemajumder said at the recent EmTech conference in Cambridge, Mass. “[That’s] because the usernames and passwords are valid not just to the site that was breached, but across the entire internet because of the fact that everyone randomly reuses the same passwords.”
In an effort to combat credential stuffing attacks, Shape Security announced the release of Blackfish, a new artificial intelligence system that identifies freshly stolen usernames and passwords — those that have not yet been disclosed or surfaced on the dark web.
“Cybercriminals often don’t make [usernames and passwords] available until they’ve extracted all of the value that they want themselves,” said Ghosemajumder, who led product management for click-fraud protection at Google for more than five years. “So there’s this window of time where users are still vulnerable.”
Shape Security already had a machine learning platform to detect credential stuffing attacks by identifying patterns of behavior that look human but are really performed by automated systems. One example is efficiently moving the mouse in a straight line from the username field to the password field to the submit button — something humans cannot do, according to Ghosemajumder.
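Shape's actual detection models are proprietary, but the straight-line mouse movement Ghosemajumder describes can be illustrated with a simple geometric check: measure how far a recorded pointer path strays from a perfect line, and flag paths with essentially zero wobble. The function names, tolerance and sample paths below are invented for illustration.

```python
import numpy as np

def max_deviation_from_line(points):
    """Max perpendicular distance (in pixels) of mouse points from the
    straight line joining the first and last point."""
    pts = np.asarray(points, dtype=float)
    start, end = pts[0], pts[-1]
    direction = end - start
    norm = np.linalg.norm(direction)
    if norm == 0:
        return 0.0
    rel = pts - start
    # 2-D cross product magnitude / segment length = perpendicular distance.
    cross = rel[:, 0] * direction[1] - rel[:, 1] * direction[0]
    return float(np.max(np.abs(cross)) / norm)

def looks_automated(points, tolerance_px=1.0):
    """Humans wobble; a path that never strays more than ~1 px
    from a perfect line is suspiciously machine-like."""
    return max_deviation_from_line(points) < tolerance_px

# A perfectly straight, bot-like path ...
bot_path = [(x, 2 * x) for x in range(0, 200, 10)]
# ... versus a path with human-like jitter.
human_path = [(x, 2 * x + (7 if x % 30 == 0 else -5)) for x in range(0, 200, 10)]
```

A production system would combine many such behavioral signals, not rely on a single threshold.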
Blackfish takes it one step further by identifying the compromised usernames and passwords and storing the information in a common knowledge base. Any subsequent login that uses the stolen credentials is checked against the knowledge base and invalidated.
“What this creates is a data-driven defense network, which is constantly learning, constantly improving and capable of autonomously defending itself,” Ghosemajumder said.
Shape Security doesn’t store the actual usernames and passwords. Instead, it uses a Bloom filter, a probabilistic data structure, which enables verification but renders the information useless to hackers. “It’s kind of like how Touch ID on the iPhone doesn’t store a picture of your fingerprint,” Ghosemajumder said. “But, instead, looks at different features that are associated with your fingerprint and then stores a mathematical representation of that.”
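A Bloom filter works by hashing each item to several bit positions in a large bit array; only the bits are stored, never the items themselves, and membership queries can yield false positives but never false negatives. This is a generic textbook sketch, not Shape's implementation, and the sample credential is made up.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: stores only bit positions derived from
    hashes of each item, never the item itself."""

    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes independent positions by salting one hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Record a (hypothetical) breached credential, then test logins against it.
breached = BloomFilter()
breached.add("alice@example.com:hunter2")
```

Because only hashed bit positions are kept, stealing the filter reveals nothing directly usable, yet a login attempt can still be screened against it in constant time.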
The comments section at the bottom of news stories, blogs and Instagram posts has become a place for incivility. Without the resources to monitor every single comment written, news organizations such as NPR are beginning to rely on a different tactic: They’re disabling the feature altogether.
But doing so has consequences. “There’s actually a contraction of space online to meet each other and exchange ideas,” said Yasmin Green, director of research and development at Jigsaw, an incubator within Alphabet Inc. (parent company of Google) that’s attempting to develop technology to solve geo-political problems.
During a fireside chat at the recent EmTech conference in Cambridge, Mass., Green described how Jigsaw has built a tool, dubbed Perspective, that can flag “toxic” comments. Perspective is an API that relies on artificial intelligence. Specifically, it uses natural language processing (NLP) that, unlike keyword-based systems, uses patterns to understand context. Perspective is capable of identifying tone and learning slang, distinguishing between, say, you’re killing it today and I’m going to kill you today, according to Green.
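Perspective is exposed as a REST API: a client POSTs a comment to the `commentanalyzer` endpoint with an API key and receives attribute scores such as TOXICITY, a probability between 0 and 1. The request and response shapes below follow Perspective's public documentation, but treat the exact field names as an assumption, and the score value in the sample response is invented.

```python
# Endpoint for Perspective's analyze call (API key appended as ?key=...).
PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def build_request(comment_text):
    """JSON body requesting the TOXICITY attribute for a comment."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response_body):
    """Pull the summary TOXICITY probability out of an analyze response."""
    return response_body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A response shaped like the one the API returns (value made up here):
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
```

In a real integration, the body from `build_request` would be POSTed to `PERSPECTIVE_URL`, and a moderation pipeline would compare `toxicity_score` against a site-specific threshold.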
Interest in the tool, so far, is encouraging. Jigsaw is partnering with news organizations like The Economist and The New York Times to moderate the comments section more efficiently and encourage community discussion. But applying AI to moderate language is still a work in progress.
Perspective was trained on internet data, and that can introduce a new wrinkle: Human bias. At one point, the tool began to identify words like gay, feminism and Muslim as toxic — that is, as words making people want to leave a conversation. These are terms that, online at least, are “disproportionately skewed toward comments that have a negative effect on people,” Green said. The model started to assume the words intrinsically had negative properties.
So the model had to be retrained on news articles that mention terms like these in a neutral way to remove its bias, according to Green. And, in the greater scheme of things, to keep discussion forums open. “The goal, of course, is to expand the space we have online to meet each other to create more inclusive conversations,” she said.
Perspective is now available to the public — and so is the chance to break the model again. “Interestingly, when you offer a group of smart people an AI to use, their instinct is to see if they can trick it,” Green said. “So please do try and trick it because that’s actually very helpful to us.” Even with artificial intelligence, perfection is a goal, not a destination.
MIT Technology Review’s annual list of Innovators Under 35 recognizes individuals who are tackling hard problems and making notable advances in the areas of AI, virtual reality, robotics and security. This year’s list included Ian Goodfellow, the inventor of generative adversarial networks, and Franziska Roesner, who focuses on augmented reality (AR) security. Recipients are given an opportunity to make a short presentation of their work at Tech Review’s annual EmTech conference in Cambridge, Mass. And here’s what Goodfellow and Roesner — whose research was selected here for its enterprise applicability — had to say.
Ian Goodfellow & generative adversarial networks
One pain point for machine learning is the man-hours it takes to label training data. “At Street View in 2013, we built the first system that was able to read with superhuman accuracy,” said Ian Goodfellow, staff research scientist at Google Brain, the search engine’s deep learning research project. “And we did that using a database of over 10 million labeled photos.”
In 2014, Goodfellow set out to make this process more efficient. Instead of relying on traditional model optimization to minimize the difference between what a model predicts and what a human says a model should predict, Goodfellow opted for game theory. He calls the technique generative adversarial networks (GANs), and it’s creating a lot of buzz in AI circles.
Two unsupervised neural networks are pitted against each other in a zero-sum game “to learn everything they can about the distribution generating the training data,” Goodfellow said. One model, the discriminator, tries to figure out if an image is real or artificially created. The other model tries to generate realistic-looking images to trick the discriminator model into thinking they’re real. The game is played until the discriminator model cannot distinguish real from artificially-created images.
The neural networks teach each other without the use of labeled data, and the results so far are promising. GANs trained with just 100 labeled images of digits have achieved the same accuracy that traditional models reached only a few years earlier using 60,000 labeled images.
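The adversarial game Goodfellow describes can be shown at toy scale: below, the "data" is a 1-D Gaussian, the generator is a linear map of noise, and the discriminator is a logistic classifier, with gradients worked out by hand. This is a didactic sketch of the GAN training loop (real GANs use deep networks and far more data); the architecture, learning rate and step count here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# Real data: samples from N(3, 1). Generator: G(z) = a*z + b with z ~ N(0, 1),
# so matching the real distribution means learning a ≈ 1, b ≈ 3.
a, b = 0.1, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: logistic regression, real labeled 1, fake labeled 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: non-saturating loss, maximize log D(G(z)).
    d_fake = sigmoid(w * fake + c)
    dl_dx = -(1 - d_fake) * w        # d(-log D)/d(fake sample)
    a -= lr * np.mean(dl_dx * z)     # chain rule through fake = a*z + b
    b -= lr * np.mean(dl_dx)

print("learned generator offset b:", round(b, 2), "(target is near 3.0)")
```

The generator never sees a labeled example; it only sees the discriminator's gradient, which is the point of the technique.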
Franziska Roesner & AR security
The benefits of AR tech are easy to dream up: It can overlay instructions on how to fix a sink, project digitally generated directions onto physical roads, and act as a guide when cooking dinner.
“But there’s also a potential dark side,” said Franziska Roesner, assistant professor of computer science and engineering at the University of Washington. What happens, she asked, when a user accidentally installs a malicious app that can deliberately distract or even obscure objects from sight? Or when users learn that their AR apps or platforms are recording and analyzing their every move?
“My work addresses the critical gap with emerging augmented reality technologies — protecting security, privacy and safety of end users,” she said.
Roesner and her team started by identifying and classifying potential risks and vulnerabilities — from data sensor protection and security to safety risks from virtual content. She and her team then began working on an AR platform that can mitigate vulnerabilities by, say, enforcing safety policies on the virtual content that applications try to display, she said.
Her research is only the beginning, but paying attention to these vulnerabilities while the technology is still not widely deployed is critical, Roesner argues. “My work lays the foundation to enable these technologies to reach their full potential, unhampered by security, privacy and safety risks,” she said.
Let me guess. You want to find AI talent to build a self-thinking artificial intelligence platform that will revolutionize your business and catapult your organization to the forefront of your industry.
You and everyone else.
Gartner analyst Peter Sondergaard gave a sobering look into today’s demand for AI professionals – people with hard-to-find combinations of data analytics, programming and business development skills. Sondergaard spoke to thousands of CIOs and other IT leaders at Gartner’s annual Symposium/ITxpo earlier this month.
With data and graphics gliding across a screen behind him, he shared Gartner data on the worldwide labor market. Aggregate talent pool: 1.5 billion. Candidates for IT jobs: 15 million. Job seekers with AI experience? A minuscule 1,300.
And in a booming tech market such as New York, the AI talent pool is 32 people deep.
“Sixteen may talk to you, and only eight are actively looking,” Sondergaard said. “Trust me, those eight individuals sit on the right side of the labor market for them.”
Those candidates are sought not just by big tech companies. Transportation, media and banking are among the industries also on the hunt, Sondergaard said.
And if your company isn’t looking for AI talent now, chances are it will be soon.
So how can you compete? Sondergaard recommends “contracting, renting, sharing, nurturing from within” — that is, taking advantage of external AI services, cultivating expertise in your own workforce or both.
That’s just what Bill Schneider’s company is looking to do. Schneider is vice president of IT at Pioneer Energy Services, a provider of drilling and well services, in San Antonio. The company is determining how AI can help make its business more efficient and improve its services — and Schneider knows how hard it is to find the right people in the labor market.
“It’s crazy how much of a population doesn’t exist for that,” he said.
So Pioneer Energy Services is looking inside its own organization for people with a passion for the machine learning algorithms, programming languages and deductive reasoning that make AI tick. Once those folks are identified, they’ll be given training on AI and data analytics tools.
The company is also looking to IT consultants for help and trying to “find the right partners” to work with and bolster its analytics practice, Schneider said.
“It’s going to be challenging to get those individuals from the outside world. So we’re focusing on internal,” he said.
To learn more about what Pioneer Energy Services wants from AI, read this SearchCIO report.
NEW YORK — Hardly a day goes by without a prediction about how artificial intelligence will radically change our lives — driving our cars, running errands, sucking up jobs. But what is the state of enterprise AI?
In a McKinsey & Co. survey of 3,000 executives conducted earlier this year on enterprise AI, only a small percentage reported they are using AI either in their core business or at scale. The vast majority of respondents — 80% — are either thinking about AI or experimenting with it. In both scenarios, companies are still figuring out use cases and what kind of talent and technology is necessary to reap value out of their AI investments.
Michael Chui, partner and head researcher at the McKinsey Global Institute, used the phrase “scratch the surface” to characterize the involvement of CXOs in AI projects. Wholehearted buy-in is what it will take for enterprise AI to succeed.
“We do think that that’s actually going to be required in order to move the needle from a corporate standpoint,” Chui said during his presentation at the Strata Data Conference.
CIO role in jump-starting enterprise AI
Still, CIOs shouldn’t sit idly by. Chui said they can lay the groundwork for enterprise AI by understanding where the potential is and where companies want to prioritize “based on both the size of the prize as well as the ease of capturing that prize.” That may come off as an obvious piece of advice, but “most people don’t do it,” he said. “Lots of people just buy what’s in the salesperson’s bag when they show up.”
And CIOs can continue advancing the company’s digitization efforts. “If we just look at technical assets deployed, what we found was those companies that have actually moved the furthest in their digitization story are also the ones who are able to take most advantage of AI,” Chui said. “So there are no shortcuts here, is another way to put it. If you’re going to go on an AI journey, you simultaneously need to go on the digitization journey.”
Understanding the data ecosystem, digitizing infrastructure, accumulating training data, and making data easy to get to are the foundation of enterprise AI.
Early adopters have moved beyond experimentation, he said, to integrate AI into core processes and find ways to scale the technology across the enterprise. The McKinsey research shows that, as with cutting-edge technologies that have come before it, CIOs should expect talent and culture — not technology — to be the biggest enterprise AI hurdle.
“If you have terrific insights, if you have a better forecast, and you don’t change the way that your company operates … you’re not going to move the needle. So you’ll need these two-handed athletes,” he said, referring to employees who are capable of solving the technical problems as well as moving the organization forward.
AI market behavior
While Chui’s advice may sound like familiar territory, not everything about the adoption of AI technology is following the usual patterns.
One striking difference is how the AI market is currently shaping up. Venture capital companies, which typically take the lead in emerging tech by investing heavily in startups, are being outpaced by the investments big companies are making in their internal AI capability. Indeed, companies appear to be spending three to four times more on their internal AI research and development than the amount currently being invested in startups by the VCs, according to McKinsey research.
“That isn’t necessarily what we think of as happening early on in the development of technology,” Chui said. “We often think VCs invest in the small companies, they become big companies, and the big companies are slow getting to it.”
Still, for VCs, the AI market is “one of the areas that has the greatest growth rate in terms of where the dollars are going,” Chui said. And, at least in 2016, VCs are placing “a slight majority” of their bets on machine learning technologies.
Chui warned the Strata audience to take the data he was presenting with a grain of salt. What constitutes an external investment in AI can be hard to categorize, and internal investments can be hard to track because they are not precisely reported.
“But it is interesting that even at this early stage, a lot of the investment is being made by the giants, and it’s being made on internal R&D,” he said.
With machine learning, data scientists have to perform a task called feature engineering. “People get the incoming data, and they prepare it, and they clean it, and they maybe manipulate it in a way that’s going to give them the relevant information,” said Edd Wilder-James, former vice president of technology strategy at Silicon Valley Data Science and now an open source strategist at Google’s TensorFlow, during a presentation at the Strata Data Conference.
Take the use of machine learning to determine if it’s day or night, and the data used to train the model is photographs. Before the model is released into production and before it’s even trained, data scientists have to determine what features in the data will help the model learn. “Our feature engineering might be as simple as counting the number of dark pixels at a certain threshold: What percentage of the image is dark?” he said.
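The dark-pixel feature Wilder-James describes can be sketched in a few lines: count the fraction of pixels below a brightness threshold and classify on that single hand-engineered number. The thresholds and the synthetic "photos" below are illustrative assumptions, not values from his talk.

```python
import numpy as np

def dark_fraction(image, threshold=50):
    """Hand-engineered feature: fraction of pixels darker than
    `threshold` in an 8-bit grayscale image (0 = black, 255 = white)."""
    return float(np.mean(image < threshold))

def predict_night(image, cutoff=0.6):
    """Toy day/night classifier: call it night if most of the frame is dark."""
    return dark_fraction(image) > cutoff

# Synthetic stand-ins for labeled photographs.
rng = np.random.default_rng(1)
day_photo = rng.integers(100, 256, size=(64, 64))   # uniformly bright pixels
night_photo = rng.integers(0, 40, size=(64, 64))    # uniformly dark pixels
```

Choosing `threshold` and `cutoff` well is exactly the domain-expertise problem he is pointing at; a dusk photo or a floodlit parking lot will break naive values.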
Pinning down features and thresholds is a difficult but vital process that requires domain expertise and knowledge of the data, according to Wilder-James. “With this kind of machine learning, a lot of the effort goes into … figuring out what are the features and making the damn thing work,” he said.
With deep learning, data scientists can skip the feature engineering step. The model instead relies on enormous training data sets — and time — to figure out which is which on its own.
“It’s slow. We’re talking days, weeks even, maybe a month to train a model,” Wilder-James said. “It requires a large amount of training data to get right. This is definitely a big data problem, in that sense.”
And be warned: Deep learning models can also be fooled. Generative adversarial techniques can alter images in ways the human eye can’t detect, tricking a model into seeing something that isn’t there. This creates big security implications, Wilder-James said.
The recent string of sexual harassment scandals in Silicon Valley may have reinforced the technology hub’s reputation as an unfriendly environment for women — but that’s not slowing the demand for female tech leaders across industries.
Ask Eric Sigurdson. A consultant at employment agency Russell Reynolds Associates, he matches candidates for senior IT leader positions with big corporations looking for tech and leadership skills. Many are explicitly seeking to expand diversity, including gender diversity, in their workplaces. His word describing the climate for women in technology: “optimistic.”
“We recruit a lot of women, particularly to CIO roles and to divisional roles,” Sigurdson said. “The first thing [companies] ask for or the last thing they ask for is, ‘We need a diverse candidate in this role if we can find them.'”
He finds them. Sigurdson put Kristy Folkwein, who was CIO at Dow Chemical, into the IT chief position at food processor ADM. He put Mary Gendron, from Hospira, into Qualcomm; and Adriana “Andi” Karaboutis, from Biogen, into National Grid.
Even companies in industries known for employing few women are seeking female tech leaders. Take industrial manufacturing — women make up 24% of its total workforce, 18% of its managers and just 12% of its executives, according to a Morgan Stanley study published in May.
Sigurdson recently spoke to a billion-dollar manufacturer in Wisconsin about female candidates for tech roles. And it is willing to look beyond the insular manufacturing world for them.
“They can learn the industry,” Sigurdson said. “They can figure it out. Frankly, they’ll bring an outsider’s perspective to the organization that can be really welcoming. And it’s being driven in large part because they need diversity on the leadership team.”
Sigurdson, who was in sales and marketing roles at IBM in the 1980s and 1990s, was taking a page from Louis Gerstner, the former CEO of the tech goliath. During his tenure, Gerstner launched a task force that sought to understand differences among groups of people to appeal to a wider set of employees and customers. That resulted in greater gender and ethnic diversity within the company’s ranks.
Ensuring that there are more female tech leaders — and that more women and minorities are represented in IT — starts with recruitment on college campuses, Sigurdson said.
“That’s your funnel for diversity,” he said. “You have to have a good representation from all different classes of people to be able to 20 years later have executives that reflect the diversity of the communities they serve.”
To learn what one coding boot camp is doing to promote gender diversity in technology, read this SearchCIO report.
Government agencies know they need to do a better job of using data-driven insights to offer better services — and their smartphone-connected constituencies won’t let them forget it.
“We’re all digitally empowered, so our expectations rise practically continuously — they never really stop rising,” said Michael Barnes, an analyst at Forrester Research. “We expect more from all the different institutions or agencies or companies or anyone we deal with because we know that that information is available.”
I talked to Barnes, who lives in Sydney, by Skype earlier this month, as Texas began its cleanup after Hurricane Harvey and Irma plowed over the Caribbean toward Florida. Predictive analytics played a big role in forecasting the paths and potential devastation of those storms — and increasingly, people want that kind of future-telling information from their governments when natural calamities are bearing down, Barnes said, because they know what technology can do.
‘A virtuous circle’
For example, it may be possible to predict potential flooding from a coming storm in a certain neighborhood of a city and locate people in that neighborhood, their smartphones serving as beacons, Barnes said. A government agency can send people an alert “that someone a few blocks away maybe doesn’t get because they can pinpoint the potential ramifications of a particular disaster down to that level.”
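The neighborhood-level targeting Barnes describes reduces, at its core, to a geofencing test: is this phone's reported location inside the risk polygon? A standard ray-casting point-in-polygon check is sketched below; the flood-zone coordinates and phone IDs are entirely invented for illustration.

```python
def point_in_polygon(lon, lat, polygon):
    """Even-odd ray-casting test: is (lon, lat) inside the polygon,
    given as a list of (lon, lat) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge cross the horizontal ray at this latitude?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical flood-risk polygon and phone locations (coordinates invented):
flood_zone = [(-95.40, 29.70), (-95.35, 29.70), (-95.35, 29.75), (-95.40, 29.75)]
phones = {"phone_a": (-95.37, 29.72), "phone_b": (-95.50, 29.72)}
to_alert = [pid for pid, (lon, lat) in phones.items()
            if point_in_polygon(lon, lat, flood_zone)]
```

A real alerting system would use a geospatial index rather than testing every device, but the targeting logic is the same: only devices inside the polygon get the message.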
And when people get those data-driven insights, he said, the reactions and responses to them get shared by tweet or by text — for example, Water’s rising downtown, get out! — to others the government alert might have missed.
“So it becomes a bit of a virtuous circle,” Barnes said. “They’re going to share that with other folks. And if organizations are in fact increasingly insights-driven, they will act on the responses to their services as much as to the external sources of data they’re accessing anyway.”
I wondered then about competition facing municipal governments. Could a technology vendor offer an app using data-driven insights to provide more accurate warnings about natural disasters than any government ever could?
“I suppose anything’s possible, but I don’t think it’s realistic,” Barnes said. “There’s always a potential for a business to pursue a service if there’s profit in it. In the case of early storm warnings, I’m not sure what the business model would be.”
Open to open data
What’s far more realistic, he said, is tech companies offering up their vast amounts of data — at no charge — to cities and towns so they can act on it. Ridesharing company Uber, for instance, this year started providing data on the trips its app tracks to governments and other organizations so they could study traffic flows and ultimately make better decisions on transportation. That may help in designing evacuation routes, for example.
For Uber, the benefit of sharing data with government agencies is clear: “If that improves the agency’s ability to act — because they have access to, say, the traffic patterns, as an example — that is in Uber’s interest in terms of overall marketing and brand building,” Barnes said.
Waze is another company dealing in open data — that is, data available for free to everyone. A 2016 Forrester report Barnes wrote cited Montreal and Jakarta, Indonesia, as cities that are collaborating with Waze on its Connected Citizens program. The Google-owned company, whose navigational software is based on user-submitted information about road closures and traffic conditions, shares its data with partnering cities, enabling them to “respond more immediately to accidents and congestion and reduce emergency response times,” the report read.
Of course, there’s always a chance for an early-storm-warning app on your iPhone XX, but probably not anytime soon.
“Near term, it’s far more likely that firms will look to share their data with government agencies and allow the government agencies to take the lead on things like emergency response,” Barnes said.
To learn more about how municipal governments can use data for better decision-making before, during and after natural disasters, read this SearchCIO report.
The hype around AI has reached decibel levels so high that CIOs may wonder why their organizations haven’t pulled off a bonafide AI project. Whit Andrews, analyst and AI agenda manager at Gartner, is of the mind that it’s way too early to be panicking over the role AI will play in enterprise IT strategies.
He tells his clients they should look at AI projects as experimental and thus be guided by the strategies and governance policies used for any experimental opportunity. But he also recommends that experimental AI projects be used to address historical challenges for the organization — specifically, pain points that haven’t been solved because there will never be enough employees to solve them.
The approach, Andrews contends, will move the organization in the right direction — to pin down where the organization can improve, figure out what skills to hire for, increase the use of data science, exploit what infrastructure capabilities are needed — and create the right environment for future AI projects.
During a recent webinar presentation, he gave the example of an insurance organization that used image analytics to address a historical problem. The company has to determine whether homes have architectural features that are likely to sustain damage during a major storm. Because it doesn’t have the manpower to send an insurance representative out to every dwelling, it asks the property owner whether those features are present.
“And when they get the response, they have to decide: Should we take the response at face value? Should we check the response with a human visit? Or should we decline the response?” Andrews said. “If the company refuses in the future to fulfill the claim because what the homeowner described was not factually correct, that’s an enormous challenge from everyone’s perspective.”
Rather than rely on property owners, the insurance company has started analyzing publicly available images of the dwellings to determine if the architectural features are present. “That means you’re not sending out somebody for every single check. And you’re not spot-checking either. You’re actually doing directed checking,” Andrews said.
If the analysis determines the architectural features are absent, then the company sends out an employee to double check, which Andrews described as “an effective use of your existing staff.”
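The "directed checking" workflow Andrews describes is essentially a routing decision on the image model's output: accept a confident match with the owner's response, and dispatch a person only when the model disagrees or is unsure. The function, labels and threshold below are a hypothetical sketch of that triage, not the insurer's actual system.

```python
def inspection_decision(feature_detected, confidence, threshold=0.85):
    """Directed checking: trust a confident 'feature present' result,
    send a person when the model says absent or isn't sure.

    feature_detected: bool, did the image model find the architectural feature?
    confidence: model confidence in [0, 1].
    """
    if feature_detected and confidence >= threshold:
        return "accept owner response"
    return "send inspector"

# Confident detection: no visit needed; anything else gets a human check.
decisions = [
    inspection_decision(True, 0.95),
    inspection_decision(False, 0.99),
    inspection_decision(True, 0.50),
]
```

The payoff is that field staff are spent only on the cases the model flags, rather than on random spot checks.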
LAS VEGAS — For an example of the transformative role drones — or unmanned aerial vehicles, as they’re known in the industry — will play across industries, just consider, said Michael Huerta, administrator of the Federal Aviation Administration, what happened after Hurricane Harvey struck Texas last week.
“The hurricane response will be looked upon as a landmark in the evolution of drone usage here in this country,” Huerta said during his opening keynote at this year’s Interdrone conference in Las Vegas.
Local, state and federal agencies, as well as companies across verticals, turned to drones to identify, assess and assist in the aftermath of the devastating Category 4 hurricane. Here are some examples of how UAVs were used in disaster response and recovery:
- Fire departments and county emergency management officials used commercial UAVs to check for damage to roads, as well as to inspect bridges, underpasses and water treatment plants to determine infrastructure that required immediate repair.
- Search and rescue workers used commercial UAVs to find civilians in desperate and unsafe conditions.
- A railroad company used commercial UAVs to survey damage to a rail line.
- Oil and energy companies used commercial UAVs to spot damage to their flooded infrastructure.
- Telecom companies deployed commercial UAVs to assess damage to their towers and associated ground equipment.
- Insurance companies used commercial UAVs to assess damage to neighborhoods.
In many situations, Huerta said, these unmanned aircraft were able to conduct low-level operations more efficiently and safely than manned aircraft. Most local airports were either closed or dedicated to emergency relief flights in the immediate aftermath of the storm, Huerta said, and fuel supplies were critically low.
“Every drone that flew meant that a traditional aircraft was not putting additional strain on an already fragile situation,” he said.
Huerta’s discussion of the important role drones played in the disaster response to Hurricane Harvey also came with some self-congratulation: He cited the FAA’s ability to quickly authorize unmanned aircraft as critical to the success of these operations.
Much of the airspace above Harvey-damaged areas was subject to temporary flight restrictions that required the FAA’s authorization. Flooded with authorization requests, the agency decided that anyone with a legitimate reason to fly an unmanned aircraft would be able to do so, Huerta said. Because of this game-time decision, it was able to approve most individual UAV operations within minutes of receiving the request.
By the end of last week, the FAA had issued over 100 authorizations of unmanned aircraft.
It’s a step in the right direction for an oversight agency that’s gotten flak in the last year — as Huerta pointed out — following the rollout of its new regulations targeting small unmanned aircraft.
Disaster response is just one example of the role commercial UAVs have — and will continue to have — across enterprises. Huerta said people will continue to be surprised by how and where drones will be used, comparing the evolution of these unmanned devices to the early days of aircraft.
“A century ago, people couldn’t foresee that clunky wood-and-fabric biplanes would morph into sleek aluminum jets, some capable of flying at supersonic speeds,” Huerta said. “And today we can’t possibly predict everything drones will be doing five or 10 years down the line; maybe even five or 10 months down the line.”