Enterprise IT Watch Blog

August 15, 2016  12:28 PM

TechTarget’s weekly roundup (8/8 – 8/15)

Profile: Michael Tidmarsh
Data Center, Huawei, Open source

Airline image via FreeImages

What’s the lesson learned from Delta’s recent data center outage? Find out in this week’s roundup.

1. Delta outage raises backup data center, power questions – Robert Gates (SearchDataCenter)

Another outage at an airline data center offers yet another lesson about the need to fail over to a backup data center and bounce back quickly after a power problem.

2. White House aims to secure open source government programs – Michael Heller (SearchSecurity)

The White House unveils a new open source government policy and new research estimates the government’s zero-day exploit stockpile to be smaller than expected.

3. New Huawei Enterprise chief handicapped by politics, not products – Antone Gonsalves (SearchNetworking)

David He, newly appointed president at Huawei Enterprise U.S., is ready to turn over 100% of sales to partners. He is optimistic despite the cloud of Chinese cyber-spying allegations.

4. Delta outage is a wake-up call for IT execs, CEOs – Brian Holak (SearchCIO)

The Delta outage isn’t the first DR-related debacle to strike a well-known organization and it won’t be the last. Also: Data theft on the rise; Intel the latest to buy AI startup.

5. NVMe over Fabrics gathers steam for flash and post-flash devices – Carol Sliwa (SearchSolidStateStorage)

Industry players are demonstrating the new NVM Express over Fabrics network interconnect technology, but it’s hard to say when it will gain widespread adoption.

August 9, 2016  3:35 PM

Grappling with first-world problems and data-fueled disruptions

Profile: Michael Tidmarsh
Artificial intelligence, Big Data

By James Kobielus (@jameskobielus)

Economic prosperity is the dream of every society. In the 21st century, we’re seeing wealth come to developing nations everywhere. It’s bringing long life expectancies, educational opportunities, and middle-class comforts to people who’ve never known any of this before.

Prosperity is also spreading advanced technologies far and wide. Though few societies are eager to return to pre-digital lifestyles, many people are uncomfortable with the rate of change, the mind-boggling complexities, and the unanticipated downsides of the technologically accelerated new world culture. This trend is stoking popular backlash against disruptive technologies such as big data analytics, cognitive computing, and artificial intelligence (AI).

Call these “first-world problems” if you will, but people everywhere have legitimate concerns about technology’s impacts on their cultures, communities, jobs, and private lives. It seems that more people are apt to portray technology as a prime scapegoat for all the bewildering forces reshaping their lives for better or worse. In other words, some people see technology as a “disruption” in the older, more pejorative sense of the term, rather than as a net boon for humanity in the more positive Silicon Valley spin.

If you’re in Silicon Valley, you should at least feel a bit nervous that some people regard your life’s work as the cause of their problems, rather than a path to a better, brighter future. Speaking of the Valley, one of its primary thought leaders, futurist Tim O’Reilly, recently sounded the alarm on this issue, stating in an interview: “What I’ve noticed is people increasingly blame technology, whether it’s gentrification in San Francisco, or the fear of rogue AI, or the working conditions of the on-demand economy. Tech is increasingly being painted as a villain.”

O’Reilly is a big booster of AI, but I couldn’t help noticing that this is the only disruptive technology he specifically singles out as the source of popular apprehension. If you regard the term “AI” as a catch-all that includes cognitive computing and big-data analytics, I agree with him that the industry needs to be sensitive to these concerns. In fact, I stated as much in this recent TechTarget column, with respect to the potential for AI-driven systems to invade privacy, be weaponized, addict emotionally vulnerable people, and otherwise contribute to undesirable societal consequences.

One of the things I found interesting in O’Reilly’s discussion was the notion that popular sentiment is constantly toggling between dystopian and utopian visions of AI’s disruptive potential. He attaches the pithy name of “WTF economy” to this bipolarity. “WTF is a great phrase,” he says, “because it can be an expression of wonder, or it can be an expression of dismay or disgust.”

To accentuate the positive pole of this vision, O’Reilly proposes what he calls the “Next:Economy” paradigm. This is a vaguely socialistic scenario in which AI-fueled technological innovations drive greater process automation throughout the economy while at the same time fostering greater human “augmentation.” This is the utopian vision of an online economy in which a never-ending flow of frictionless, on-demand, algorithmic transactions makes everybody richer, smarter, more productive, more creative, and fulfilled. In this vision, “companies…have more than profit at the heart of their model. They have a societal benefit.”

In an article earlier this year, O’Reilly hints vaguely at guidance for societal movers and shakers who seek to bring this data-driven utopia to fruition. However, he gives no indication as to how one might use AI or any other technological enabler to ensure that an organization’s business model can generate a never-ending stream of “societal benefit,” apart from the usual advantages that flow from a vibrant, innovative, and free marketplace (with or without AI).

I’m not philosophically opposed to O’Reilly’s vision. I agree with him on the potential for data-driven technologies to help national and regional economies move in this direction. But if you’re a working technology professional, it can be hard to identify what, if anything, you should be doing differently to respond to these popular concerns regarding disruption (in the negative sense).

Near as I can tell, O’Reilly’s vision seems to be calling for such technological enablers as cloud-first business platforms, open data, agile collaboration systems, loosely coupled microservices, data-driven next best actions, self-service personalization, and experience optimization. However, many companies have already invested heavily in those and other technologies as the building blocks of their digital business models. Many of those same organizations have also taken privacy, security, governance, and risk compliance mandates to heart and enforce them on an enterprise-wide basis.

It seems to me that, taken to its logical extreme, O’Reilly’s vision calls for some sort of algorithmic resource that calculates societally optimal outcomes and drives orchestrated next-best-action scenarios to deliver those outcomes automatically and universally. And that, in turn, would presuppose some sort of societal regulatory regime for defining what those societally sanctioned outcomes might be.

But I doubt that O’Reilly would actually take it to that extreme. His vision is more “invisible hand” in its emphasis on ensuring that online marketplaces are structured to achieve these outcomes without need for state intervention or heavy-handed regulation.

And that’s the proper orientation. As societies across the planet join the so-called “first world,” they will all evolve their economies toward this algorithmically driven model. As very different national cultures move in this common direction, we shouldn’t dismiss people’s fears surrounding the disruptions, dislocations, and disorientations that accompany this migration.

But we shouldn’t buy into the alarmist notion that somehow “technology” in the abstract is the source of these problems or that some societies will inevitably suffer in the process. As the world economy races deeper into the 21st century, each society must find its own way of ensuring that its people benefit from this trend to the maximum extent feasible.

August 1, 2016  9:44 AM

TechTarget’s weekly roundup (7/25 – 8/1)

Profile: Michael Tidmarsh
Citrix, cybersecurity, Oracle, Verizon, Yahoo

Purchase image via FreeImages

What do you make of Verizon’s purchase of Yahoo? Find out how the company plans to make its mark in the digital content market in this week’s roundup.

1. Verizon purchase of Yahoo a risky bid for digital content – Brian Holak (SearchCIO)

With the Verizon purchase of Yahoo, the telecommunications company hopes to break into the cutthroat business of digital content, but challenges await. Also: Oracle invests in cloud; more Microsoft layoffs.

2. Oracle cloud ERP gains ground with planned $9.3 billion purchase of NetSuite – Jack Vaughan (SearchOracle)

The Oracle cloud ERP chase could gain speed, thanks to a $9.3B plan to buy cloud applications vendor NetSuite. The software giant’s timing may be good, as more users look to the cloud for ERP deployments.

3. Microsoft Stream marks major push in business video – Antone Gonsalves (SearchUnifiedCommunications)

Microsoft Stream, backed by the vendor’s marketing power, is expected to draw more enterprises into the business video market.

4. White House unveils federal cybersecurity plan and attack rating system – Michael Heller (SearchSecurity)

The White House’s new federal cybersecurity plan outlines the responsibilities of each agency in a cyberattack and creates a rating system to determine the severity of an attack.

5. Citrix GoTo joins LogMeIn as housecleaning continues – Ramin Edmond (SearchVirtualDesktop)

Citrix GoTo will merge with remote desktop vendor LogMeIn, so Citrix can devote more resources to its core application delivery, networking and mobility products.

July 25, 2016  9:30 AM

TechTarget’s weekly roundup (7/18 – 7/25)

Profile: Michael Tidmarsh
Artificial intelligence, CDO, Data privacy, DDOS

Data image via FreeImages

Do you think the chief data officer role is overblown? Find out why the CDO role is in a current state of flux in this week’s roundup.

1. The chief data officer’s dilemma — CDO role in flux – Jack Vaughan (SearchDataManagement)

How to balance data safety with innovative big data expansion was at issue at an MIT symposium where the chief data officer role was considered.

2. Data privacy in the spotlight with Privacy Shield, Microsoft – Trevor Jones (SearchCloudComputing)

Data privacy continues to be a hot-button issue on both sides of the Atlantic, with the Privacy Shield agreement and a big win for Microsoft providing some clarity to this still murky issue.

3. DNS DDoS attack shuts down Library of Congress websites for three days – Michael Heller (SearchSecurity)

A DNS DDoS attack hit the Library of Congress, disrupting various Library services and websites for three days before IT staff was able to restore normal functionality.

4. New AT&T Network on Demand service provides Cisco virtual router, more – Antone Gonsalves (SearchNetworking)

The latest AT&T Network on Demand service provides virtualized versions of Cisco or Juniper routers, Fortinet firewalls or Riverbed WAN optimization technology.

5. The future of AI apps will be delivery as a service – Ed Burns (SearchBusinessAnalytics)

AI systems are generating huge hype right now, which makes it imperative for businesses to understand how the technology can be deployed most effectively.

July 21, 2016  9:13 AM

Surmounting huge hurdles to algorithmic accountability

Profile: Michael Tidmarsh

Algorithm image via FreeImages

By James Kobielus (@jameskobielus)

Algorithms are a bit like insects. Most of the time, we’re content to let them buzz innocuously in our environment, pollinating our garden and generally going about their merry business.

Under most scenarios, algorithms are helpful little critters. Embedded in operational applications, they make decisions, take actions, and deliver results continuously, reliably, and invisibly. But on the odd occasion that an algorithm stings, encroaching on your privacy or perhaps targeting you with a barrage of objectionable solicitations, your first impulse may be to swat back in anger.

That image came to mind as I pondered the new European Union (EU) regulation that was discussed by Cade Metz in this recent Wired article. Due to take effect in 2018, the General Data Protection Regulation prohibits any “automated individual decision-making” that “significantly affects” EU citizens. Specifically, it restricts any algorithmic approach that factors a wide range of personal data—including behavior, location, movements, health, interests, preferences, economic status, and so on—into automated decisions.

Considering how pervasive algorithmic processes are in everybody’s lives, this sort of regulation might encourage more people to retaliate against the occasional nuisance using legal channels. The EU’s regulation requires that impacted individuals have the option to review the specific sequence of steps, variables, and data behind a particular algorithmic decision.
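To make the "right to explanation" concrete, here is a minimal sketch of what a replayable decision audit trail might look like. Everything here is hypothetical — the `DecisionRecord` schema, the toy `score_credit` rule, and the field names are illustrative stand-ins, not any real compliance framework:

```python
import time
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One replayable entry in a hypothetical algorithmic audit trail."""
    subject_id: str
    inputs: dict           # the personal-data variables the decision saw
    model_version: str     # which algorithm revision produced the outcome
    outcome: str
    trace: list            # the ordered steps the logic actually took
    timestamp: float = field(default_factory=time.time)

def score_credit(inputs, log):
    """Toy decision rule that appends every step it takes to `log`."""
    log.append("check_income")
    if inputs["income"] < 20000:
        log.append("reject: income below threshold")
        return "reject"
    log.append("check_history")
    if inputs["defaults"] > 0:
        log.append("reject: prior defaults")
        return "reject"
    log.append("accept")
    return "accept"

def decide(subject_id, inputs):
    trace = []
    outcome = score_credit(inputs, trace)
    return DecisionRecord(subject_id, inputs, "toy-rules-v1", outcome, trace)

rec = decide("citizen-42", {"income": 15000, "defaults": 0})
print(rec.outcome)  # reject
print(rec.trace)    # the exact sequence of steps behind that decision
```

For a hand-written rule set like this one, capturing the step sequence is easy; the hard part, as the next sections argue, is that real deployed models rarely expose such a tidy path.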

Now that’s definitely a tall order to fill. The regulation’s “right to explanation” requires a degree of algorithmic transparency that could be extremely difficult to ensure under many real-world circumstances. Algorithms’ seeming anonymity–coupled with their daunting size, complexity, and obscurity–presents a thorny problem of accountability. Compounding the opacity is the fact that many algorithms—be they machine learning, convolutional neural networks, or whatever–are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years.

Throwing more decision scientists at the problem (even if there were enough of these unicorns to go around) wouldn’t necessarily lighten the burden of assessing algorithmic accountability. As the cited article states, “Explaining what goes on inside a neural network is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it’s difficult to determine exactly why they work so well. You can’t easily trace their precise path to a final answer.”
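One workaround practitioners reach for when a model's internal path can't be traced is black-box probing: perturb each input and see whether the decision flips. The sketch below is a deliberately tiny illustration of that idea, with a made-up stand-in model — not how any production explainability tool actually works:

```python
def black_box(features):
    # Stand-in for an opaque model: a hidden weighted sum plus a threshold.
    w = [0.8, 0.1, -0.5]
    return sum(wi * xi for wi, xi in zip(w, features)) > 0

def flip_test(model, x, delta=1.0):
    """Nudge each feature by `delta`; flag the ones that flip the decision."""
    base = model(x)
    flips = []
    for i in range(len(x)):
        nudged = list(x)
        nudged[i] += delta
        flips.append(int(model(nudged) != base))  # 1 if the outcome changed
    return flips

x = [1.0, 0.0, 2.0]
print(flip_test(black_box, x))  # [1, 0, 0]: only the first feature is pivotal here
```

Even this crude probe only answers "which input mattered for this one decision at this one point" — it says nothing about why the model weights are what they are, which is exactly the gap the experts quoted above are describing.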

Algorithmic accountability is not for the faint of heart, even among technical professionals who live and breathe this stuff. In many real-world distributed applications, algorithmic decision automation takes place across exceptionally complex environments. These may involve linked algorithmic processes executing on myriad runtime engines, streaming fabrics, database platforms, and middleware layers.

For example, this recent article outlines the challenges that Facebook faces in logging, aggregating, correlating, and analyzing all the decision-automation variables relevant to its troubleshooting, e-discovery, and other real-time operational requirements. In Facebook’s case, the limits of algorithmic accountability are clearly evident in the fact that, though it stores low-level messaging traffic in HDFS, this data can only be replayed for “up to a few days.”

Now imagine that decision-automation experts are summoned to replay the entire narrative surrounding a particular algorithmic decision in a court of law, even in environments less complex than Facebook’s. In such circumstances, a well-meaning enterprise may risk serious consequences if a judge rules against its specific approach to algorithmic decision automation. Even if the entire fine-grained algorithmic audit trail somehow materializes, you would need to be a master storyteller to net it out in simple enough terms to satisfy all parties to the proceeding. Most of the people you’re trying to explain this stuff to may not know a machine-learning algorithm from a hole in the ground.

More often than we’d like to believe, there will be no single human expert–or even (irony alert) algorithmic tool–that can frame a specific decision-automation narrative in simple, but not simplistic, English. Check out this post from last year, in which I discuss the challenges of automating the generation of complex decision-automation narratives.

Even if you could replay automated decisions in every fine detail and with perfect narrative clarity, you may still be ill-equipped to assess whether the best algorithmic decision was made. Check out this recent article by Michael Kassner for an excellent discussion of the challenge of independent algorithmic verification.

Given the unfathomable number, speed, and complexity of most algorithmic decisions, very few will, in practice, be submitted for post-mortem third-party reassessment. Only some extraordinary future circumstance—such as a legal proceeding, contractual dispute, or showstopping technical glitch—will compel impacted parties to revisit those automated decisions.

And there may even be fundamental technical constraints that prevent investigators from determining whether a particular algorithm made the best decision. A particular deployed instance of an algorithm may have been unable to consider all relevant factors at decision time due to lack of sufficient short-term, working, and episodic memory. As Facebook’s Yann LeCun stated in this presentation, recurrent neural networks “cannot remember things for very long”—typically holding “thought vector” data structures in memory for no more than 20 seconds during runtime.
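The retention limits described above — Facebook's few-day replay window, an RNN's 20-second thought vectors — boil down to the same structural fact: a bounded memory silently evicts its oldest context. A minimal sketch (the `BoundedMemory` class is a hypothetical illustration, not any real system's API):

```python
from collections import deque

class BoundedMemory:
    """A fixed-capacity event log: once full, the oldest context is gone for good."""
    def __init__(self, capacity):
        self.window = deque(maxlen=capacity)  # deque drops oldest items itself

    def observe(self, event):
        self.window.append(event)

    def replay(self):
        # An after-the-fact audit can only see what's still inside the window.
        return list(self.window)

mem = BoundedMemory(capacity=3)
for step in ["a", "b", "c", "d", "e"]:
    mem.observe(step)
print(mem.replay())  # ['c', 'd', 'e'] -- steps 'a' and 'b' are unrecoverable
```

Scale the capacity up to a few days of messaging traffic or down to 20 seconds of thought vectors, and the investigator's problem is identical: the decisive early steps may simply no longer exist anywhere.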

In other words, algorithms, just like you and me, may have limited attention spans and finite memories. Their bias is in-the-moment action. Asking them to retrace their exact decision sequence at some point in the indefinite future is a bit like asking you or me to explain why we used a particular object to swat a particular mosquito nine months ago.

July 18, 2016  8:50 AM

TechTarget’s weekly roundup (7/11 – 7/18)

Profile: Michael Tidmarsh
Authentication, Azure, Cisco, Docker, Storage

Security image via FreeImages

Are you concerned about Google’s OAuth authentication system after the Pokemon GO controversy? Check out how the flaw was fixed in this week’s roundup.

1. Pokemon GO reveals full account access flaw for Google authentication – Peter Loshin (SearchSecurity)

The wildly popular Pokémon GO mobile game obtained a full account access token to iOS users’ Google accounts, revealing a major issue with Google’s OAuth authentication system.

2. CEO Robbins wants more Cisco applications, cloud services – Antone Gonsalves (SearchNetworking)

CEO Robbins said at the Cisco Live conference that customers should expect a much higher percentage of future products to be delivered as Cisco applications or cloud services.

3. Microsoft delays Azure Stack, will sell it only through OEMs – Ed Scannell and Trevor Jones (SearchCloudComputing)

Waiting another six months for Microsoft’s Azure Stack will be a hiccup for some IT shops, but narrowing their hardware choices for it will be far less welcome.

4. Pokemon Go craze: A harbinger of the augmented enterprise? – Francesca Sales (SearchCIO)

Pokemon Go is pushing augmented reality closer to enterprise adoption. Also on Searchlight, Microsoft wins in warrant case; Google hit with new charges.

5. StorageOS gives your Docker volume persistent container storage – Garry Kranz (SearchStorage)

Startup StorageOS launches software-defined storage for containers. Development teams choose a targeted Docker volume in a container and provision storage and data services.

July 11, 2016  9:32 AM

TechTarget’s weekly roundup (7/4 – 7/11)

Profile: Michael Tidmarsh
Antivirus, Brexit, Polycom, Rackspace, Red Hat

Virus image via FreeImages

What do you think about the state of the antivirus market? Check out why Avast’s purchase of AVG could mean trouble in this week’s roundup.

1. Avast purchase of AVG shows uncertainty in antivirus market – Michael Heller (SearchSecurity)

Avast purchased competitor AVG to improve consumer and enterprise products, but one expert said the purchase price proves the antivirus market may be on the decline.

2. Polycom acquisition reverses course with new offer – Katherine Finnell (SearchUnifiedCommunications)

The Polycom-Mitel merger is dead, as Polycom accepts a cash offer from Siris Capital Group. The new deal would make Polycom a private provider of video conferencing services.

3. Rackspace becomes Microsoft cloud distributor, recruits partners – John Moore (SearchCloudProvider)

Rackspace is recruiting channel partners to resell and refer Azure and Office 365 as the company expands its role in Microsoft’s Cloud Solution Provider program.

4. Red Hat reveals vision for more architecture-driven BPM models – George Lawton (SearchSOA)

BPM and DevOps have embraced the same goals from different sides of the organization. Now, Red Hat is trying to close the gap between developers and business process creation.

5. Brexit strategy: CIO planning for UK-EU split – Jason Sparapani (SearchCIO)

For CIOs, the UK-EU split opens up a continent of unknowns. Also in Searchlight: Cisco locks on to security startup CloudLock; IBM woos blockchain coders; $4 smartphones hit India.

July 5, 2016  9:42 AM

TechTarget’s weekly roundup (6/27 – 7/4)

Profile: Michael Tidmarsh
Brexit, Intel, RHEL, SAP, Video conferencing

Security image via FreeImages

Are you surprised Intel is looking into selling off its security business? Check out some of the possible reasons behind the move in this week’s roundup.

1. Intel reportedly considering selling its security business – Michael Heller (SearchSecurity)

New reports suggest Intel may be looking into selling off its security business, and experts are unclear whether it means Intel’s McAfee acquisition has gone sour.

2. Video conferencing market growth slowed by infrastructure sales – Katherine Finnell (SearchUnifiedCommunications)

In UC news, an industry report predicts slow growth in the video conferencing market, while a study finds SMBs are embracing cloud-based tools faster than larger enterprises.

3. RHEL 8 promises relief from dependency hell, more integration – Meredith Courtemanche (SearchDataCenter)

With RHEL hitting platform maturity, Red Hat’s future includes Microsoft integration, expanded management and a smaller footprint.

4. Buy SAP again? 60% of customers say no, says Nucleus Research – Jim O’Donnell (SearchSAP)

A new report from Nucleus Research says six out of 10 SAP customers would not buy SAP products again, and even in the core ERP market, nine out of 10 won’t consider S/4HANA.

5. Brexit strategy: CIO planning for UK-EU split – Jason Sparapani (SearchCIO)

For CIOs, the UK-EU split opens up a continent of unknowns. Also in Searchlight: Cisco locks on to security startup CloudLock; IBM woos blockchain coders; $4 smartphones hit India.

June 28, 2016  3:35 PM

Accentuating the positive vision of cognitive computing’s potential

Profile: Michael Tidmarsh
Artificial intelligence, Cognitive computing

Cognitive image via FreeImages

By James Kobielus (@jameskobielus)

Popular anxieties have lives of their own. For whatever reason, many people are unsettled by artificial intelligence. Perhaps this is because AI—and more broadly, cognitive computing–has recently been evolving with breathtaking speed from a niche technology to a pervasive force in every aspect of our lives.

To buoy hopes and allay the fears that surround this technology, you need a balanced vision of its future impacts on society. Whatever vision you proclaim should accentuate the positive potential of cognitive computing while highlighting its risks and addressing how those might be mitigated. If you’re one of my regular readers, you know that I’m enthusiastic about the myriad valuable applications of cognitive computing systems such as IBM Watson. Rather than recapitulate my entire bibliography on this topic, I’ll just point you to this InfoWorld column from late last year.

No, I’m not nervous about the risk of cognitive computing running amok. However, I like to think that I’m no Pollyanna. I’m not under the illusion that humanity will always use cognitive computing or any other technology for good. In that regard, check out some of my recent thinking on what you might call the “dark sides” of this technology. Among other discussions, I’ve dissected the potential of cognitive systems to be weaponized, addict emotionally vulnerable people, and inadvertently interpret non-existent visual patterns in blurry images.

I’m particularly leery of unhinged sci-fi-stoked fantasies that have no grounding in the reality of how cognitive computing is being used or is likely to be incorporated into our lives. That’s why I came down hard late last year on the “bogus bogeyman of the brainiac robot overlord,” which referred to the absurd notion that humanity is in danger of being enslaved by superintelligent robots.

What exactly is freaking out the general public regarding the potential downsides of this technology? I’ll summarize the chief anxieties as follows, arranging these in a spectrum from mildly unsettling to thoroughly unhinged:

  • Helplessness: This is the belief that cognitive apps have already entrenched themselves in so many roles in our lives that it’s too late to put the genie and its potential adverse consequences back in the bottle.
  • Bewilderment: This is the feeling that cognitive technologies are so complex and evolving so fast that nobody, not even the top experts, understands how it all works or can rein it all in if it happens to spin out of control.
  • Deprecation: This is the sense that organic human intelligence will no longer seem special as general-purpose cognitive computing becomes agile enough to solve problems in any sphere of knowledge.
  • Usurpation: This is the worry that cognitive-powered programs, capable of natural language communication and empathetic engagement, will let systems simulate the “human touch” without authentic humans in the loop.
  • Unemployment: This is the Luddite-grade anxiety that cognitive systems will automate jobs at every skill level and pay scale, resulting in massive layoffs as careers and industries disappear seemingly overnight.
  • Impersonation: This is science-fiction-fueled panic about the possibility of cognitive systems, aided and abetted by self-fabricating smart materials, spawning a new race of “Terminator”-like humanoids that can pass for human and replicate themselves at will.
  • Annihilation: This is the cartoonish nightmare of cognitive-powered robots conspiring to destroy us, whether on their own or under the direction of a criminal human mastermind.

Clearly, there’s a broad ecosystem of popular naysaying that surrounds cognitive computing. However, it would be inappropriate to polarize the climate of opinion into distinct “for” or “against” camps. Many people have mixed thoughts on the topic. Well-informed people know that the technology drives many positive innovations, but they also harbor justified misgivings about its potential abuse.

A perfect case in point is Elon Musk, who has not been shy in voicing his opinions regarding the potential downsides of AI in society. As the founder of Tesla and SpaceX, Musk is obviously passionate about advanced technologies, has a futuristic vision about their transformative potential, and puts ample money where his mouth is. In addition to running his transportation-related businesses, Musk has also recently funded a global research program with the vague goal of “keeping AI beneficial to humanity.”

Getting down to brass tacks, Musk also recently co-founded OpenAI, which describes itself as a “non-profit artificial intelligence research company [that has the goal of advancing] digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The group’s initial projects are focused on the technology’s use in household robots, intelligent personal advisors, and virtual mini-world gaming environments.

We should note that many people who harp on cognitive computing’s downsides also hope to make money from the technology’s incorporation into their companies’ products and services. That’s certainly the case with Musk, and for Bill Gates as well. And there’s nothing wrong with that.

If you’re passionate about cognitive computing, you’ll want to defend it against those who attack it on the basis of ill-informed, unbalanced, speculative, conspiratorial, and dystopic fantasies about its potential for misuse. That, in turn, demands redoubled vigilance at mythbusting the most unwarranted negative beliefs about these technologies.

If you’re on the same page as me on this matter, I urge you to check out this article I published last year that busts several prevailing myths about cognitive computing.

June 27, 2016  9:59 AM

TechTarget’s weekly roundup (6/20 – 6/27)

Profile: Michael Tidmarsh
Dell, EMC, privacy

Privacy image via FreeImages

Where do you stand on the Rule 41 changes? Check out the viewpoints of each side in this week’s roundup.

1. Activists, DOJ spar over Rule 41 changes to enhance FBI searches – Peter Loshin (SearchSecurity)

EFF and privacy activists oppose Rule 41 changes, while the Department of Justice claims the changes do not alter ‘traditional protections’ under the Fourth Amendment.

2. Dell software biz jettisoned to advance EMC buy, users’ views mixed – Ed Scannell and Robert Gates (SearchDataCenter)

Handing off most of its software business to private equity firms is one more step toward Dell’s mega-purchase of EMC, and it gives users both clarity and concerns.

3. Potential PC replacements poised for enterprise prominence – Eddie Lockhart (SearchEnterpriseDesktop)

The PC has long been the top dog in the enterprise, but new, inexpensive devices such as the Raspberry Pi and Google’s Chromebook could threaten its title.

4. Pros, newbies seek new answers to cloud questions at Cloud Expo – Jason Sparapani (SearchCIO)

Consultants and job seekers joined practitioners at the Cloud Expo conference with cloud questions on topics ranging from container storage to the internet of things.

5. End of Dell Cloud Manager shows slow growth in multicloud – Trevor Jones (SearchCloudComputing)

Shuttering Dell Cloud Manager, an early piece of Dell’s move to trusted advisor in public cloud, underscores that the multicloud market has been more hype than reality — at least so far.
