Enterprise IT Watch Blog


December 5, 2016  9:51 AM

TechTarget’s weekly roundup (11/28 – 12/5)

Profile: Michael Tidmarsh
Artificial intelligence, Internet of Things, SAP, Storage

Information technology image via FreeImages

How will the 2017 technology trends affect your organization? Find out in this week’s roundup.

1. AI, IoT, intelligent systems take center stage in 2017 technology trends – Lauren Horwitz (SearchCRM)

Experts held forth on the promise and pitfalls of technologies that are changing today’s environment at the Gilbane conference.

2. Multicloud computing bliss not yet a reality for all IT shops – Kristin Knapp (SearchCloudComputing)

Experts predict that multicloud computing will be a top enterprise trend in 2017, but some cloud users question whether the touted benefits are worth the jump over significant IT management hurdles.

3. SAP unveils new IoT services to help derive business value from IoT – Jim O’Donnell (SearchSAP)

SAP has released three new IoT services to help manage IoT data and get value from it: SAP Connected Goods, SAP Dynamic Edge Processing and SAP IoT Application Enablement.

4. Last ditch Senate efforts fail to stall Rule 41 changes – Peter Loshin (SearchSecurity)

After a final push to delay changes to Rule 41 failed in the Senate, the U.S. government now has much wider authority to legally search computers whose location is unknown.

5. Hedge fund returns Nexsan storage to private ownership – Garry Kranz (SearchStorage)

Nexsan storage technology is the only moneymaking asset for publicly traded Imation. The vendor is going private with help from a Louisiana hedge fund.

December 2, 2016  4:10 PM

How the backlash against cognitive computing will play out in 2017

Profile: Michael Tidmarsh
Artificial intelligence, Cognitive computing, Data Science, Machine learning


Science technology image via FreeImages

By James Kobielus (@jameskobielus)

Something tells me that 2017 will be a year of intense backlash in the world at large. Judging by the results and immediate aftermath of the US presidential election, the new year does not bode well for positive, uplifting visions in high-tech or any other sector of our society.

In the IT world, we’re already deep into a backlash cycle triggered by the disturbing incidence and likely impact of online “fake news” on the just-completed political campaign. Just as disturbing is the popular backlash against data science that was triggered by the epic failure of prominent data journalists to predict Donald Trump’s defeat of Hillary Clinton. Initial indications are that the incoming president-elect will single out left-leaning Silicon Valley types for special scorn.

If we consider popular culture as a whole, what are the most likely backlashes relevant to artificial intelligence, cognitive computing, machine learning, predictive analytics, and data science that we may see in 2017? Judging by recent experience, it’s a safe bet that we’ll see the following sorts of controversies throughout the year:

Clearly, none of these are new worries. It’s obviously too early to say which of these downsides will be most salient in the popular mind in 2017. Whether positive perceptions of AI and cognitive computing outweigh the negatives depends on how overall economic, political, and social trends play out in the year to come.

None will prove to be a showstopper for the spread and evolution of AI and cognitive computing in the world at large. But these themes will, to varying degrees, affect people’s enthusiasm for embracing these innovations at the core of their lives.


November 28, 2016  11:32 AM

TechTarget’s weekly roundup (11/21 – 11/28)

Profile: Michael Tidmarsh
Artificial intelligence, AWS, Cisco, cybersecurity, OpenStack


Conference image via FreeImages

What are you expecting from AWS re:Invent 2016? Find out what to look out for in this week’s roundup.

1. AWS re:Invent conference evolves, addresses growing pains – SearchAWS staff (SearchAWS)

AWS re:Invent 2016 promises a larger venue, more sessions and a focus on technologies like microservices and Lambda. Our experts look at how the conference has changed since 2012.

2. OpenStack enterprise adoption still awaits full embrace – Robert Gates (SearchDataCenter)

OpenStack in the enterprise is more likely to see continued adoption via vendor distributions and managed services than via the raw code favored by big-name customers such as PayPal.

3. DHS hiring puts into question the cybersecurity skills shortage – Michael Heller (SearchSecurity)

A successful hiring event by the Department of Homeland Security calls into question the existence of the cybersecurity skills shortage, but experts wonder if the event was an outlier.

4. Cisco patent infringement avoided, new Arista OSes OK to import – Antone Gonsalves (SearchNetworking)

Federal officials have cleared Arista’s newer switches for importation into the U.S. Trade officials had previously found older Arista products in violation of three Cisco patents.

5. Creative projects leave people guessing about future impact of AI – Ed Burns (SearchBusinessAnalytics)

A push is underway to write creative AI algorithms that can engage in music, film and design projects. So far, they have delivered mixed results.


November 21, 2016  10:47 AM

TechTarget’s weekly roundup (11/14 – 11/21)

Profile: Michael Tidmarsh
Cisco, OpenStack, predictive modeling, SQL Server

Election image via FreeImages

What went so wrong for election forecasters using predictive modeling? Find out in this week’s roundup.

1. How predictive modeling and forecasting failed to pick election winner – Ed Burns (SearchBusinessAnalytics)

Nearly all predictive modeling algorithms were way off in picking the winner of the presidential election. What went wrong can strike any predictive analytics project if data scientists and other analysts aren’t careful.

2. Cisco earnings show service providers are not buying – Antone Gonsalves (SearchNetworking)

The latest Cisco earnings show a drop in overall revenue, as service providers spend less. Cisco blames lower sales on macroeconomics.

3. Congress floats last-chance bill to delay Rule 41 changes – Peter Loshin (SearchSecurity)

Just two weeks before the deadline, U.S. lawmakers seek to postpone until next summer the acceptance of controversial updates to Rule 41, allowing legal access to unspecified systems.

4. Microsoft previews SQL Server on Linux, opens features across editions – Jack Vaughan (SearchSQLServer)

Microsoft looks to broaden the horizons of SQL Server, as it moves some Enterprise features to Standard Edition and introduces SQL Server on Linux.

5. OpenStack Newton storage features include data encryption – Carol Sliwa (SearchCloudStorage)

Storage updates in OpenStack’s Newton release include at-rest data encryption in Swift, a message API for async tasks in Cinder and driver-assisted migration in Manila.


November 14, 2016  9:46 AM

TechTarget’s weekly roundup (11/7 – 11/14)

Profile: Michael Tidmarsh
cybersecurity, DevOps, Health IT, Public Cloud

President image via FreeImages

How will the Trump presidency affect the future of health IT? Find out in this week’s roundup.

1. Experts say how Trump presidency might affect the future of health IT – Kristen Lee (SearchHealthIT)

What will the future of health IT be under the new Trump administration? Health IT experts offer up predictions and reassurances for the future.

2. Public cloud and big banks finally on the same page – Trevor Jones (SearchCloudComputing)

Public cloud and big banks weren’t always a good fit, but financial juggernauts have gone beyond their four walls to test the waters of hyperscale cloud computing.

3. At DOES 2016, important lessons from the DevOps journey – Valerie Silverthorne (SearchSoftwareQuality)

DevOps is a long journey of small changes in culture and process. At DOES 2016, experts who are well along their way offer their best tips. Some may surprise you.

4. Post-election Russian hacker cyberattacks evade malware detection – Michael Heller (SearchSecurity)

A rash of spear phishing attacks by Russian hacker groups was seen following the presidential election this week, but antivirus and malware detection has been failing enterprises.

5. President-elect silent on federal cybersecurity policies – Eamon McCarthy Earls (SearchNetworking)

This week, bloggers look ahead to the new administration’s cybersecurity policies, how to close gaps in app delivery management and the best way to optimize data centers.


November 9, 2016  12:27 PM

Accelerating deep learning to superhuman proportions

Profile: Michael Tidmarsh
Algorithms, Deep learning

Algorithm image via FreeImages

By James Kobielus (@jameskobielus)

Deep learning delivers extraordinary cognitive powers in the never-ending battle to distill sense from data at ever larger scales. But high performance doesn’t come cheap.

Deep learning relies on the application of multilevel neural-network algorithms to high-dimensional data objects. As such, it requires fast matrix manipulations in highly parallel architectures in order to identify complex, elusive patterns—such as objects, faces, voices, and threats—amid big data’s “3 V” noise. As evidence for the technology’s increasingly superhuman cognitive abilities, check out research projects such as this that use it to put the Turing test to shame.
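To make those matrix manipulations concrete, here is a minimal sketch of a two-layer feedforward pass written in NumPy; the dimensions are arbitrary illustrative values rather than figures from any particular project:

    # Minimal sketch: a two-layer feedforward pass in NumPy, showing how deep
    # learning reduces to large matrix multiplications over high-dimensional inputs.
    import numpy as np

    np.random.seed(0)

    batch, input_dim, hidden_dim, classes = 64, 10000, 512, 10   # illustrative sizes

    X = np.random.randn(batch, input_dim)              # e.g. flattened images or sensor feeds
    W1 = np.random.randn(input_dim, hidden_dim) * 0.01
    W2 = np.random.randn(hidden_dim, classes) * 0.01

    hidden = np.maximum(X.dot(W1), 0.0)                # big matrix multiply, then ReLU
    logits = hidden.dot(W2)                            # another big matrix multiply
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)       # softmax over candidate patterns

    print(probs.shape)                                 # (64, 10): one score per class per input

Every step above is dense linear algebra, which is exactly the workload that highly parallel hardware accelerates.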

Extremely high-dimensional data is the bane of deep learning from a performance standpoint. That’s because crunching through high-dimensionality data is an exceptionally resource-intensive task, often consuming every last bit of processor, memory, disk, and I/O capacity thrown at it. Examples of the sorts of high-dimensional objects against which deep learning algorithms are usually applied include streaming media, photographic images, aggregated environmental feeds, rich behavioral data, and geospatial intelligence.
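A rough back-of-the-envelope calculation makes the resource pressure concrete (the frame size, batch size, and precision below are assumed purely for illustration):

    # Illustrative arithmetic: memory consumed by one batch of raw high-dimensional
    # inputs, before any intermediate activations or model weights are counted.
    frame_dims = 1920 * 1080 * 3      # one HD RGB frame: ~6.2 million input dimensions
    batch_size = 256                  # assumed batch size
    bytes_per_value = 4               # 32-bit floats

    batch_gb = frame_dims * batch_size * bytes_per_value / 1e9
    print(round(batch_gb, 1), "GB per batch")   # ~6.4 GB of raw inputs alone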

In data scientists’ attempts to algorithmically replicate the unfathomable intricacies of the mind, they must necessarily leverage the fastest chips, the largest clusters, and the most capacious interconnect bandwidth available to drive increasingly sophisticated deep learning algorithms. All of that assumes, of course, that these high-performance cluster-computing services are within their budgets.

What’s the optimal hardware substrate for deep learning? For high-dimensional deep learning to become more practical and pervasive, the underlying pattern-crunching hardware needs to meet the following criteria:

  • It needs to become faster, cheaper, more scalable, and more versatile.
  • It needs to be capable of processing data sets that will continue to grow in dimensionality as new sources are added, merged with other data, and analyzed by deep learning algorithms of greater sophistication.
  • The whole stack—ranging from the chipsets and servers to the massively parallel clusters and distributed clouds—will need to keep crunching through higher-dimensionality data sets that also scale inexorably in volume, velocity, and variety.

Increasingly, many industry observers are touting graphics processing units (GPUs) as the ideal chipsets for deep learning. As discussed in this 2015 Wired article and this recent Data Science Central post, GPUs–which were originally developed for video games and have high-performance math-processing features–may be far less hardware-intensive and less costly than general-purpose CPUs.

The Wired article mentions a Stanford researcher who used GPUs to “string together a three-computer system that could do the work of Google’s 1,000-computer cloud.” The article is also quick to point out that GPUs are pulling their deep-learning weight in production commercial and government applications, including as a complement to supercomputing resources at national laboratories. And it notes that some of the more intensive deep-learning algorithms are using GPUs to crunch through many billions of dimensions. The Data Science Central article, from a GPU hardware vendor, says that GPU technology is getting “smarter at a pace way faster than Moore’s Law,” though it offers none of the price-performance trend data needed to bolster that claim.
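As a rough sketch of how the same math gets steered toward one chipset or the other, the snippet below pins a large matrix multiply to a GPU when one is present and falls back to the CPU otherwise; it assumes the TensorFlow library and is purely illustrative, not drawn from either article:

    # Illustrative sketch (assumes TensorFlow): the dense matrix multiply that
    # dominates deep learning, requested on a GPU with a CPU fallback.
    import tensorflow as tf

    with tf.device("/gpu:0"):                  # ask for the GPU explicitly
        a = tf.random_normal([4096, 4096])
        b = tf.random_normal([4096, 4096])
        c = tf.matmul(a, b)

    # allow_soft_placement lets the same graph run on the CPU if no GPU exists
    config = tf.ConfigProto(allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        result = sess.run(c)

    print(result.shape)                        # (4096, 4096)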

All of this raises the question of whether general-purpose CPUs have a future in high-performance deep learning. Some argue that general-purpose CPUs might continue to add value, either stand-alone or as a complement to GPUs, as long as they continue to improve in performance and to the extent that they’ve been optimized for high-performance, massively parallel clusters built on low-cost commodity hardware. Users such as Facebook are relying on GPU-based infrastructure to train their deep learning models, while also exploring new multi-core CPU chips that may approach the performance of GPUs in the near future.

A chipset-agnostic hybrid deep-learning hardware environment such as this may be the best approach, considering the vast range of specialized deep-learning applications and the likelihood that various hardware substrates will be optimized for diverse types of algorithmic analysis. In such a scenario, special-purpose “neural” chips, such as IBM SyNAPSE, may be incorporated for tasks for which neither GPUs nor CPUs are optimal. FPGAs are also a credible option for deep learning.

Let’s leave quantum computing fabrics out of the discussion, at least until they emerge from the laboratory ready for robust commercial deep-learning implementations. Deep learning needs serious acceleration in the here and now and shouldn’t pin its outsize performance requirements on unproven architectures that still have one foot in the lab.


November 7, 2016  10:24 AM

TechTarget’s weekly roundup (10/31 – 11/7)

Profile: Michael Tidmarsh
Cisco, Microsoft, Oracle, Slack

Security image via FreeImages

Did Microsoft downplay the Windows zero-day vulnerability? Find out why the company is coming under fire for its response in this week’s roundup.

1. Experts question Microsoft’s Windows zero-day response – Michael Heller (SearchSecurity)

A Windows zero-day disclosed by Google caught Microsoft between patch cycles, and experts questioned whether Microsoft downplayed the severity of the vulnerability.

2. Oracle IaaS has foothold with legacy shops, plays catch-up to AWS – Kristin Knapp (SearchCloudComputing)

Oracle hopes to rival public cloud giant AWS in the IaaS market, and while it could win over legacy Oracle shops, it needs to attract developers and non-Oracle users, too.

3. Cisco pledges a quicker rollout of network automation tools – Antone Gonsalves (SearchNetworking)

Cisco tells Partner Summit attendees it will move faster to deliver the network automation tools the vendor has been slow to provide.

4. Election Protection helps voters with call center technology – Jesse Scardina (SearchCRM)

Election Protection handles upward of 100,000 calls per day as Election Day nears, routing calls from high-call-volume states using Genesys.

5. Microsoft takes on Slack with new team chat app – Katherine Finnell (SearchUnifiedCommunications)

Microsoft has unveiled its new team chat app, Microsoft Teams. The app is built into Office 365 and looks to compete with Slack, Cisco Spark and Unify Circuit.


October 31, 2016  9:33 AM

TechTarget’s weekly roundup (10/24 – 10/31)

Profile: Michael Tidmarsh
Apple, DevOps, SDN, Windows 10, Yahoo

PC image via FreeImages

Windows PCs continue to dominate Macs in the business market. Find out why Apple is lagging behind in this week’s roundup.

1. Cost, applications thwart Macs in business – Ramin Edmond (SearchEnterpriseDesktop)

Windows PCs continue to outpace Macs in business. Cost, application options and Apple’s desktop and laptop strategy are all issues.

2. Yahoo, Verizon execs: Prepare your networking team for SDN and DevOps – Jennifer English (SearchSDN)

Enterprises need to prepare networking team members for SDN and DevOps, as the industry continues to change, Verizon and Yahoo said at the ONUG fall conference.

3. Details emerging on Dyn DNS DDoS attack, Mirai IoT botnet – Peter Loshin (SearchSecurity)

As more details emerge on last week’s massive Dyn DNS DDoS, new analysis indicates that as few as 100,000 Mirai IoT botnet nodes were enlisted in the incident, with reported attack rates of up to 1.2 Tbps.

4. Users still kicking the tires on IBM’s cognitive applications – Ed Burns (SearchBusinessAnalytics)

There’s a huge amount of excitement around IBM’s cognitive computing tools, but users are still looking for ways to implement them in their businesses.

5. Microsoft Windows 10 event rings in immersive computing – Brian Holak (SearchCIO)

At this week’s Microsoft Windows 10 event, the company offered a glimpse of an immersive computing future. Also in Searchlight: AT&T, Time Warner and the future of 5G wireless services.


October 24, 2016  9:52 AM

TechTarget’s weekly roundup (10/17 – 10/24)

Profile: Michael Tidmarsh
DDOS, Dell EMC, Hyper-convergence, Internet of Things, salesforce

Technology future image via FreeImages

Are you ready for the latest trends in the IT industry? Find out how you can prepare in this week’s roundup.

1. These 10 trends in IT are rewriting the rules: Are you prepared? – Stephen J. Bigelow (SearchITOperations)

The rapid evolution of data centers and cloud hinges on these 10 key IT trends, from hybrid infrastructures to IoT and smartly managing capacity — and new IT roles to manage it all.

2. Dell EMC hyper-converged infrastructure power-up stirs new uses, users – Robert Gates (SearchDataCenter)

Adding Dell servers to its hyper-converged infrastructure lineup boosts scalability and reliability for more use cases, says Dell EMC, but some customers may not be easily swayed.

3. Salesforce IoT Cloud roadmap includes machine learning – Lauren Horwitz (SearchSalesforce)

Salesforce IoT Cloud will get a boost from Einstein features, including machine learning, to make predictions, which could help companies prevent product failures.

4. CIOs, your chief execs want help on digital business innovation – Jason Sparapani (SearchCIO)

Here’s an important dispatch from Symposium/ITXpo: Digital business innovation is at the heart of the CEO psyche; CIOs need to help.

5. Dyn hit by massive DNS DDoS, Eastern U.S. bears brunt of attacks – Peter Loshin (SearchSecurity)

At least two DNS DDoS attacks on Dyn are disrupting access to many popular websites; users and companies in the Eastern U.S. are impacted.


October 18, 2016  11:05 AM

Keeping a clear mind about the potential downsides of AI

Profile: Michael Tidmarsh
Artificial intelligence


Artificial intelligence image via FreeImages

By James Kobielus (@jameskobielus)

It’s not hard to grab your 15 minutes of attention in the mass media. All you need to do is argue that the latest technological mania is going to ruin the world.

Alarmist warnings about artificial intelligence (AI) seem to be everywhere right now. I’m a bit jaded by all this sensationalism. Earlier this year I published my thoughts on this topic, in which I outlined the principal overheated arguments being made against AI and its data-driven cousin: cognitive computing. If you watched the otherwise excellent October 9 “CBS Sixty Minutes” episode on AI, you saw many of those arguments rehashed.

Now I’m seeing a new theme in the anti-AI backlash: the notion that growing reliance on data-driven cognitive computing will turn users into gibbering idiots. That’s essentially the thesis of Bernard Marr’s recent Forbes article, as flagged in the headline “Is Stupidity A Dangerous Side Effect Of Big-Data-Driven AI?” In the article itself, Marr softens that tone just a wee bit, using the term “de-skill” to refer to the process under which automating cognitive functions may cause people to forget how to handle them unassisted. But it’s clear that Marr believes the technology risks dumbing down AI-assisted tasks to the point at which people may become passive appendages to the machine (or, at the very least, to machine learning algorithms).

My feeling is that this is more of a red herring than a real issue. The fact that AI has made a specific mental task easier doesn’t imply that you, the person whose cognitive load is being lightened, are in danger of becoming an imbecile. We’ve been living with high-tech cognition offloaders—such as spreadsheets and electronic calculators–for the past couple of generations, but those don’t seem to have spawned mass mathematical illiteracy. People still need to master the same core concepts—addition, subtraction, division, multiplication, etc.–in order to use these tools correctly.

So before we let our fevered imaginations get the better of us, let’s consider how deeply entrenched AI has already become in our lives, and how little we need to fear it. If you consider the range of tasks for which AI is becoming ubiquitous, it’s obvious that none of this is reducing any of us to a state of drooling incoherence. AI’s principal applications so far have been for conversational chatbots, speech recognition, face recognition, image classification, natural language processing, computer vision, fraud detection, and environmental sensing. Though these AI-powered applications are everywhere—such as your smartphone–I’m pretty sure that most of us still have no problem speaking to human agents, understanding spoken language, or identifying a familiar face without technological spoonfeeding.

If anything, AI-enriched applications, appliances, the Internet of Things, and intelligent robotics are extending all of our senses. They’re honing our innate intuitions to a finer degree through immersive pattern sensing. And they’re empowering our neuromusculature in new ways, spurring all of us to evolve our organic smarts in order to embrace the amazing possibilities being unlocked.

Of course, it is quite possible that AI will be misapplied in many application contexts. That’s the essence of Marr’s arguments. He sketches out several speculative decision scenarios in which particular professionals may give AI-infused applications too much latitude:

  • AI-guided flight-automation systems may create a new generation of pilots who aren’t knowledgeable or attentive enough to manually override and fly the plane when they need to.
  • AI-driven autonomous vehicles may lessen people’s incentive to learn how to operate cars manually (assuming that this is even possible in future self-driving vehicles).
  • AI-powered medical systems may weaken physicians’ ability to render expert diagnostic judgments grounded in their own manual review of patient records.
  • AI-monitored manufacturing assembly lines may cause quality assurance specialists to ignore the evidence of their own senses, thereby contributing to a surge in defective products that enter the market.

Those scenarios are worth considering, but the more you ponder them the less likely they seem. Let’s consider each of the scenarios in turn:

  • Only insane pilots would rely unthinkingly on AI-guided navigation systems, considering that their own lives (not to mention those of their passengers and people on the ground) hang in the balance.
  • The same applies to the occupants in AI-steered autonomous vehicles, many of whom won’t enter such vehicles unless there’s a designated human driver who can assume controls in a pinch.
  • Likewise, few responsible doctors will take reckless risks with patients’ lives by delegating unthinkingly to AI-powered systems—or, at the very least, their malpractice insurers will rein in any such temptations.
  • And manufacturing quality control specialists will lose their jobs in no time if they sign off on too many AI-certified false-negatives at the tail end of the production process.

Marr states, rightly, that what would be most problematic is any AI-driven process that entirely lacks human oversight and control. But even in those instances (which in any likely future scenario will be the exception, not the rule), specific humans will still be held responsible for the results of algorithmically guided processes.

Real-world checks and balances will keep AI-powered processes from running amok. When circumstances demand it, heads will roll and AI-driven processes’ most adverse decision paths will be reined back in.

