Computer security image via FreeImages
As 2018 continues forward, check out some of the biggest cybersecurity trends to monitor throughout the rest of the year in this week’s roundup.
1. Top 2018 cybersecurity trends to watch out for – Mekhala Roy (SearchCIO)
A glance at IT news shows cybersecurity trends remain on companies’ radar. At the CIO Boston Summit, Cybereason’s Jessica Stanford discussed steps to defend against risk.
2. C3 IoT simplifies AI and IoT app development – Darryl Taft (SearchCloudComputing)
The C3 IoT low-code development platform helps developers of all skill levels build AI and other next-generation applications without having to write any code.
3. Twitter bug exposes passwords of all 336 million users – Michael Heller (SearchSecurity)
On none other than World Password Day, a Twitter bug was announced that led to the passwords of all 336 million users being stored in plaintext in an internal log.
4. Why going private should help grow Mitel cloud – Jonathan Dame (SearchUnifiedCommunications)
A private equity firm plans to acquire Mitel for $2 billion in the second half of 2018. The move should give the vendor more flexibility, as it pushes adoption of Mitel cloud.
5. Qlik readies new Qlik Sense features for cloud, AI, big data – Craig Stedman (SearchBusinessAnalytics)
Self-service BI vendor Qlik is beginning to add a promised set of advanced functionality to its Qlik Sense software, but the features will be delivered in stages.
Jobs image via FreeImages
By James Kobielus (@jameskobielus)
People’s anxiety over artificial intelligence’s job-killing potential has gotten out of hand. When I see AI discussed anywhere now, the popular assumption seems to be that it will destroy far more jobs than it will create, and that those it will create will be lower-skilled, lower-paying, and dead-end.
Let’s examine these assumptions head-on. For starters, there’s an obvious macroeconomic fallacy in the notion that AI will cause mass unemployment. If AI is to become a pervasive factor of production in the 21st-century economy, there must be strong demand for AI-infused goods and services. But that demand won’t materialize if there’s no purchasing power behind it. And that purchasing power must come from people with income-paying jobs. Consequently, the AI revolution will strangle itself if it kills many people’s current jobs and the economy doesn’t replace them with other jobs in a reasonable amount of time.
Many observers admit that AI may boost productivity throughout the economy and even some day produce more jobs in the aggregate, though some economists doubt whether that’s a sure bet. But many argue that this will happen only after AI triggers major structural dislocations, as established employers go belly-up, entire professions become obsolete, and formerly stable industries undergo wrenching transformations. In these scenarios, the culprit is AI-powered automation, and the likely victims are any occupation in which work is largely repetitive, codifiable, routine, and predictable.
Considering that this describes a fair amount of most people’s working lives, fears of AI-based—or more broadly, algorithm-driven—automation tend to know no bounds in many popular discussions. The extreme scenario to which people’s science-fiction-stoked imaginations usually jump is replacing everybody with AI-infused robots (which is absurd from an economy-wide standpoint, for the reason I cited in the previous paragraph).
Let’s get real. It’s more reasonable to ask which sectors of the economy AI-driven automation will impact first, fastest, and most disruptively. However, nobody can agree where the hammer will hit hardest: working-class, administrative, technical, creative, middle-management, or other job categories. For example, a recent Pew Research Center survey of “experts” (tech professionals, academics, researchers, journalists, futurists, entrepreneurs, etc.) found no real consensus on this issue. Roughly half the respondents expect AI to displace vast numbers of blue- and white-collar jobs in the aggregate, while the other half forecast an employment impact that’s positive or neutral.
In another unsatisfying finding, due both to the long time frame and the overbroad focus on “computerization” writ large (rather than AI as a subset of that), Oxford University researchers found that “47 percent of US workers have a high probability of seeing their jobs automated over the next 20 years.” Considering that two decades is a significant span of time in which most workers probably will land replacement jobs, that sounds more like a macroeconomic productivity rise from all information technologies, rather than a sudden shock to the economy from just the range of technologies lumped under AI.
If we limit our focus to AI’s likely impact on automatable tasks (as opposed to jobs), we can get a better handle on how particular jobs might fare in this new economic order. The Organization for Economic Cooperation and Development estimated that 14 percent of jobs in its member countries are highly automatable at a task level, hence likely to be impacted by AI and related technologies, with many of those jobs being low-skilled in nature.
As is common in such studies, the researchers assume that the tasks least likely to be automated are those involving such faculties as teamwork, strategy, analysis, creativity, judgment, critical thinking, and emotional intelligence. However, they don’t consider the possibility that people currently in highly automatable jobs may evolve their skills toward positions that not only leverage those faculties but are highly useful for AI—especially those that involve science, technology, engineering, and mathematics. For example, people are flooding into the fast-growing field of data science to develop, manage, and otherwise get in on the AI revolution before it overturns their careers.
Many journalists tend to reduce the topic of AI’s jobs impact to speculative narratives about the possible impact on particular professions. For example, we’ve all read the stories speculating how AI-powered autonomous vehicles may throw truckers and cab drivers out of work (as if they don’t have enough to worry about from sharing-economy disruptors such as Uber, which, of course, is also a big AI user).
Taking this approach is an excellent way to fabricate scary stories while distracting from the fact that growth sectors such as self-driving cars will also create jobs in manufacturing, programming, operations, customer service, and so on. The net jobs impact could be positive or negative, depending on how long a leash you give to your practical imagination.
Another approach for identifying AI’s likely impact on employment is to estimate the potential impacts of specific technologies under this umbrella, including deep learning (DL), machine learning (ML), natural language processing (NLP), and computer vision. Clearly, that can be very difficult to do on a speculative basis for any emerging technology, or even retroactively after the technology has diffused throughout the economy.
For example, McKinsey claims that “most jobs created by technology are outside the technology-producing sector itself” and estimates “that the introduction of the personal computer … has enabled the creation of 15.8 million net new jobs in the United States since 1980, even after accounting for jobs displaced.”
Under the AI umbrella, will DL, for example, create more jobs than it destroys? The answer depends, of course, on whether deep neural networks become as pervasive a productivity booster as PCs were. It also depends on the extent to which there’s a concomitant increase in the number of data scientist positions needed to build, train, and manage these models, as well as the extent of automation in those professions. These are wildcard forecasting variables that people are free to speculate on but for which no one has a good crystal ball.
However the employment trends shake out, humans will remain at the center of the AI loop. In this regard, MIT Sloan Management Review published an interesting article last year in which they broke out future AI employment opportunities into three broad categories:
- “Trainers”: This is their term encompassing all AI-focused DevOps roles. These jobs involve building and tuning the ML, DL, NLP, and other statistical and rule-based logic that fuels AI’s algorithmic smarts.
- “Explainers”: This refers to all AI-focused accountability and transparency roles. These jobs track, audit, and report on the end-to-end AI DevOps pipeline, providing stakeholders with visualizations, reports, and other outputs that explain what models, data, and other algorithmic artifacts drove any given algorithmic outcome.
- “Sustainers”: This refers to all AI-focused management, monitoring, and governance roles. These jobs ensure that unintended AI consequences in operational runtime environments are promptly addressed. Much of what they do is exception handling: a human steps in when AI-driven systems can’t be trusted to produce the optimal outcome. Typically, a “sustainer” feeds human judgment back into the algorithm so it learns to tackle the problem better in the future.
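The “sustainer” exception-handling pattern described above can be sketched as a simple confidence gate: predictions the model is unsure about are routed to a human, and the human’s decision is queued as a new training example so the system learns to handle the case next time. This is a minimal illustration with hypothetical names, not any vendor’s actual API.

```python
# Minimal human-in-the-loop "sustainer" sketch: low-confidence
# predictions are escalated to a person, and the human's answer is
# queued for retraining so the model improves over time.
# All names here are hypothetical.

CONFIDENCE_THRESHOLD = 0.85

retraining_queue = []  # (features, human_label) pairs fed back into training

def ask_human(features):
    # Stand-in for a real review UI; here we just tag the case.
    return "human_label_for_" + str(features)

def handle_prediction(features, label, confidence):
    """Return a trusted label, escalating to a human when unsure."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "automated"
    human_label = ask_human(features)
    retraining_queue.append((features, human_label))  # close the loop
    return human_label, "escalated"

# High confidence: the model's answer stands.
print(handle_prediction([1, 2], "approve", 0.97))  # ('approve', 'automated')
# Low confidence: a human decides, and the case is queued for retraining.
print(handle_prediction([3, 4], "approve", 0.42))
```

The key design point is the feedback append: escalation isn’t just a fallback, it’s the mechanism by which human judgment flows back into the algorithm.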
Another way of approaching the AI “human in the loop” phenomenon is to consider the ongoing need for “AI safety” jobs (which some refer to as “responsible AI”). This is what I like to think of as a “shepherding” role, giving humans responsibility for building and deploying AI applications, managing and controlling them in production environments, and keeping them compliant and accountable to legal, cultural, and other mandates. In this regard, I break out future AI employment categories as follows:
- AI alignment: These jobs focus on ensuring that AI-driven systems align with stakeholder values, comply with privacy mandates, are free from socioeconomic biases, conform to ethical and moral principles, and are capable of compromising in exceptional cases.
- AI accountability: These jobs ensure that there’s always a clear indication of human accountability, responsibility, and liability for algorithmic outcomes and that AI-driven processes are transparent, explicable, and interpretable to average humans.
- AI resilience: These jobs stand ready to throttle AI-driven decision making in circumstances where the uncertainty is too great to justify autonomous actions. They manage failsafe procedures so that humans may take back control when automated AI applications reach the limits of their competency. They also ensure that AI-driven applications behave in consistent, predictable patterns and are free from unintended side effects, even when they are required to dynamically adapt to changing circumstances. They are responsible for protecting AI applications from adversarial attacks that are designed to exploit vulnerabilities in their underlying statistical algorithms. And they oversee engineering of AI algorithms that fail gracefully, rather than catastrophically, when the environment’s data departs significantly from the circumstances for which they were trained.
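The AI resilience role above centers on refusing to act autonomously when inputs drift far from training conditions. A crude but illustrative guardrail compares each incoming value against the training distribution’s mean and standard deviation and falls back to human review beyond a z-score cutoff. This is a sketch of the idea, assuming a one-dimensional input; it is not a production drift detector.

```python
import statistics

# Sketch of an "AI resilience" guardrail: refuse autonomous action when
# an input lies far outside the data the model was trained on.

TRAINING_SAMPLE = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
MEAN = statistics.mean(TRAINING_SAMPLE)
STDEV = statistics.stdev(TRAINING_SAMPLE)
Z_CUTOFF = 3.0  # beyond ~3 standard deviations, throttle the model

def guarded_predict(x, model):
    """Run the model only on in-distribution inputs; else fail safe."""
    z = abs(x - MEAN) / STDEV
    if z > Z_CUTOFF:
        return None, "deferred_to_human"  # graceful fallback, not a crash
    return model(x), "automated"

model = lambda x: "normal" if x < 10.5 else "elevated"
print(guarded_predict(10.1, model))  # in-distribution: runs automatically
print(guarded_predict(55.0, model))  # far out-of-distribution: deferred
```

Real systems would use model-reported uncertainty or multivariate drift statistics, but the contract is the same: the automated path is only taken inside the envelope the model was trained for.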
I don’t imagine that everybody who loses their job directly or indirectly due to AI will land in one of these nouveau positions. But it’s highly likely that these AI shepherding functions will become core administrative tasks managed by people in many different positions in future businesses.
You can’t build your business on AI—or disrupt your industry—if you don’t keep your human staff firmly in the loop on all of this.
Security image via FreeImages
At RSA Conference last week, a panel of female cybersecurity professionals talked about the best advice and encouragement they received when starting out in cybersecurity. Check out their advice in this week’s roundup.
1. Women in cybersecurity discuss hiring, advice and being mentors – Michael Heller (SearchSecurity)
A panel of women cybersecurity professionals at the RSA Conference discussed ways to find the best job candidates, the best advice they’ve received and how to be better mentors.
2. Workplace ‘mindfulness’ as coping mechanism for AI disruption – George Lawton (SearchCIO)
Two tech titans investing in the AI tools that automate jobs are also sinking money into workplace mindfulness programs aimed at helping employees become better at being human.
3. ATA18 to explore the impact, potential of telemedicine – Tayla Holman (SearchHealthIT)
AI in healthcare will feature prominently at the 2018 ATA annual conference. The role of telemedicine in the fight against the opioid crisis will also be discussed.
4. What Salesforce’s CloudCraze acquisition means for customers – Jesse Scardina (SearchSalesforce)
CloudCraze president calls the acquisition by Salesforce ‘a huge win’ for its customers and for Commerce Cloud customers, with the capability of selling B2C and B2B on one platform.
5. GDPR compliance requirements don’t come cheap – Trevor Jones (SearchCloudComputing)
GDPR has more teeth than any previous data privacy directive, but that looming threat hasn’t motivated many companies to get their audit trail in order.
Security image via FreeImages
What cybersecurity improvements do you think are most pressing? Find out what RSAC keynote speakers said the infosec community must focus on in order to enact real change in this week’s roundup.
1. Teradata applies time series analytics tools to IoT data – Jack Vaughan (IoT Agenda)
Data warehouse pioneer Teradata looks to ease IoT data analysis with capabilities that address skills gaps on time series analytics techniques, which are coming more to the fore.
2. RSAC keynote speakers push teamwork, incremental improvements – Michael Heller (SearchSecurity)
A variant of the Mirai IoT botnet is the suspected cause of distributed denial-of-service attacks on financial services companies earlier this year, according to Recorded Future.
3. Bold cloud computing plan puts Accenture on fast track – Jason Sparapani (SearchCIO)
Three years ago, the consulting company aimed to go from 10% in the public cloud to 90%. How’s it faring? Infrastructure chief Merim Becirovic explains in this interview.
4. Microsoft takes holistic approach to IoT security concerns – Trevor Jones (SearchCloudComputing)
Azure Sphere extends security from the cloud to the device. It’s the most holistic approach on the market and provides another example of Microsoft abandoning its insular past.
5. Twilio SIM cards simplify IoT connectivity – Jonathan Dame (SearchUnifiedCommunications)
Twilio has released programmable SIM cards for IoT connectivity. The Twilio SIM cards connect to the vendor’s APIs for voice, text and data communications.
Security image via FreeImages
Ransomware is finally king of the hill — albeit an ignominious hill — as the most prevalent type of malware found in attacks, according to the 2018 Verizon Data Breach Investigations Report.
1. Ransomware threat tops Verizon Data Breach Report – Michael Heller (SearchSecurity)
After years of climbing the ranks in the Verizon Data Breach Investigations Report, the ransomware threat has finally taken the top spot as the most prevalent malware type.
2. How IBM’s data science team quickens users’ AI projects – Ed Scannell (SearchDataCenter)
In this Q&A, IBM’s Seth Dobrin discusses the rising user interest in machine learning and AI projects and the help inexperienced users need to launch those projects.
3. Onus is on CIOs to address limitations of artificial intelligence – George Lawton (SearchCIO)
Recognizing the limitations of artificial intelligence is step No. 1 for CIOs aiming to reap its benefits, according to AI luminaries at the recent EmTech Digital conference.
4. With cloud IoT platform, AWS hopes to leap stubborn barriers – David Carty (SearchAWS)
During the last six months, AWS has demonstrated a commitment to IoT technology. But while it has a prominent place in the cloud IoT market, the technology doesn’t fit all needs.
5. Edge computing, cloud data centers drive decentralized IT – Eamon McCarthy Earls (SearchNetworking)
This week, bloggers explore the growth of decentralized IT, thanks to edge computing; Infoblox DNS security updates; and how machine learning is becoming a feature, rather than a product.
Networking image via FreeImages
Which SDN players do you think are making helpful advances, and is your organization using any of them? Check out IDC’s report on which SDN companies to look out for in this week’s roundup.
1. IDC names four SDN providers to watch – Jennifer English (SearchSDN)
An IDC report named Apstra, Big Switch Networks, Plexxi and Pluribus Networks as SDN providers to watch; Martello aims to validate Elfiq-based SD-WAN; and Aryaka extends to China.
2. IBM lures developers with AI and machine learning projects – Darryl Taft (SearchSoftwareQuality)
IBM open source projects help to facilitate the creation of machine learning apps and grow that developer base.
3. Misconfigured cloud storage leaves 1.5B files exposed – Michael Heller (SearchSecurity)
Researchers found misconfigured cloud storage across multiple platforms left huge amounts of data exposed, including medical information and payroll data.
4. Using Salesforce Social Studio, Harvard saw power of social media – Jesse Scardina (SearchSalesforce)
By looking at its social media data with Salesforce, Harvard Graduate School of Education saw more than 65% of article traffic generated through social media.
5. Report: Patient relationship management could curb readmissions – Scott Wallask (SearchHealthIT)
A new report from Chilmark Research points to future gains for patient relationship management software, potentially in the tricky area of decreasing 30-day patient readmissions.
Cyber image via FreeImages
What do you think of Atlanta’s incident response to the ransomware attack? Check out all of the details surrounding the cyberattack in this week’s roundup.
1. Five days after Atlanta ransomware attack, recovery begins – Michael Heller (SearchSecurity)
After battling the fallout from an Atlanta ransomware attack for five days, Mayor Keisha Bottoms said City Hall has finally begun to recover and turn systems back on.
2. Blockchain and cloud combine to fuel enterprise adoption – Kristin Knapp and David Carty (SearchCloudComputing)
While still in its infancy, blockchain technology is no longer theoretical. And cloud providers will line up to cash in, as enterprises plan to invest millions.
3. Rubrik backup for Nutanix HCI makes hospitals’ IT healthy – Dave Raffo (SearchStorage)
The hospital group’s IT team liked Nutanix hyper-converged infrastructure for primary storage so much that it went with converged backup vendor Rubrik for data protection.
4. MIT: Energy-efficient chip improves IoT encryption, authentication – Ben Cole (SearchCIO)
MIT researchers have developed an energy-efficient, hard-wired chip that they say will benefit IoT encryption and ease authentication processes in the IoT environment.
5. Progressive web apps drive mobile development of the future – Erica Mixon (SearchMobileComputing)
Progressive web apps offer many benefits, leading organizations to take advantage of this trend in mobile app dev. A lack of Apple support stands in the way for some, however.
Healthcare image via FreeImages
Healthcare cybersecurity woes have worsened – find out why organizations are struggling to keep up with cyberattack prevention in this week’s roundup.
1. Cybersecurity in healthcare ails from lack of IT talent – Shaun Sutner (SearchHealthIT)
Healthcare cybersecurity woes continue unabated, with more frequent cyberattacks amid a lack of IT talent and employee awareness, but organizations are spending more on security.
2. IBM cloud tools aim to woo the unconvinced – Trevor Jones (SearchCloudComputing)
IBM’s latest batch of cloud tools aims to help customers deploy and manage workloads on private and public clouds and keep the company at the center of their cloud strategies.
3. Tintri replication speeds agriculture firm’s backup, restore – Paul Crocetti (SearchDisasterRecovery)
Tintri hybrid flash enables Life-Science Innovations to complete replication every night for all of its servers. The system can provide a server restoration in seconds.
4. Firefox bug exposes passwords to brute force – for nine years – Peter Loshin (SearchSecurity)
A Firefox bug exposing the browser’s master password to a simple brute force attack against inadequate SHA-1 hashing is still on the books after nearly nine years.
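The weakness described above is that a single, unsalted SHA-1 pass is cheap to brute-force: an attacker can test guesses against the stored digest at hardware speed. The standard remedy is a salted, deliberately slow key-derivation function. Python’s standard-library `hashlib.pbkdf2_hmac` illustrates the idea; the iteration count below is illustrative, not a recommendation for any particular product.

```python
import hashlib
import hmac
import os

# One unsalted SHA-1 pass (the weak scheme described above): each
# attacker guess costs a single cheap hash.
weak_digest = hashlib.sha1(b"master-password").hexdigest()

# A salted, iterated KDF makes each guess cost ~100,000 hashes instead.
salt = os.urandom(16)
ITERATIONS = 100_000  # illustrative figure; tune to your hardware budget

def derive(password: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS)

stored = derive(b"master-password")

def verify(guess: bytes) -> bool:
    # Re-derive and compare in constant time to avoid timing leaks.
    return hmac.compare_digest(derive(guess), stored)

print(verify(b"master-password"))  # True
print(verify(b"wrong-guess"))      # False
```

The salt defeats precomputed rainbow tables, and the iteration count turns a billions-of-guesses-per-second attack into one that is many orders of magnitude slower per guess.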
5. ObjectRocket launches Azure MongoDB service – Jack Vaughan (SearchSQLServer)
Count ObjectRocket among those pursuing Azure MongoDB deployments. This open source NoSQL database continues to find traction on the web and in the cloud.
Application image via FreeImages
By James Kobielus (@jameskobielus)
Artificial intelligence (AI) is transforming practically every aspect of the application, analytics, data management, and IT infrastructure markets.
Increasingly, AI applications live in the cloud. As Wikibon found in our recent Big Data Analytics Trends and Forecast, the application industry’s transformation toward AI-first go-to-market models is well underway.
In this regard, here are the key trends that Wikibon sees disrupting the application market in the era of all-things AI:
- Old-line business analytics application vendors are losing momentum as the market migrates to comprehensive public cloud-based offerings that use AI to add value to data warehousing, data lakes, stream computing, in-memory cubes, real-time decision support, and other traditional enterprise applications.
- Established application vendors are migrating their solutions to public clouds that leverage the sophisticated AI APIs, libraries, and 24×7 managed services in those environments.
- Analytic app vendors are shifting their solution portfolios toward delivery of packaged AI applications that deliver fast industry or task-specific business outcomes.
- Application platforms are being architected for continued versionless feature evolution through continual refresh of the AI, metadata, rules, graphs, and other intelligent artifacts that have been deployed to the cloud edge.
- Application development tools are evolving toward a focus on building and orchestrating AI microservices in distributed environments, especially in mobility, robotics, sensor networks, and other edge scenarios.
- Big data catalogs are becoming the centerpiece of vendors’ data-lake platforms, enabling real-time curation, exploration, modeling, training, deployment, and governance of AI applications.
- AI-driven IT management tools are becoming commonplace, enabling 24×7 automated event monitoring, root-cause diagnostics, and predictive remediation of application, network, and system performance.
- More new enterprise application-development projects that come online involve building AI-driven smarts for deployment to mobile, embedded, and Internet of Things endpoints, as well as to massively parallel data centers, and domain-specific gateways.
- Increasingly, enterprises will be adopting AI-infused solutions as pre-built, pre-trained templatized cloud offerings that continuously and automatically adapt and tune themselves to deliver desired business outcomes.
For more depth on all of these trends, please check out the market study here.
Speech image via FreeImages
Thanks to AI, speech technology is now more than just speech-to-text dictation for note taking and documentation. But find out why enterprises may not be ready for the technology in this week’s roundup.
1. Ready for artificial intelligence in speech recognition? – Katherine Finnell (SearchUnifiedCommunications)
Artificial intelligence in speech recognition is transforming the technology, but are enterprises ready to employ these new tools within their operations?
2. EBay’s Elasticsearch hack consolidates Kubernetes monitoring – Beth Pariseau (SearchITOperations)
EBay made Kubernetes monitoring more flexible for developers and consistent for ops through modifications that are now part of Elastic’s Beats software.
3. IIC addresses industrial IoT security on endpoints – Sharon Shea (IoT Agenda)
In a new document, the Industrial Internet Consortium abridges IEC and NIST publications, offering clear, concise guidance to ensure IIoT security in connected plants.
4. Scrivito unveils serverless CMS product – Jesse Scardina (SearchContentManagement)
By building the CMS with ReactJS, Scrivito gained traction with the development community, according to an analyst.
5. Leaked report on AMD chip flaws raises ethical disclosure questions – Michael Heller (SearchSecurity)
Researchers announced AMD chip flaws without the coordinated disclosure procedure, and a leak of the research to a short seller has raised further suspicions about the process.