Enterprise IT Watch Blog


July 25, 2016  9:30 AM

TechTarget’s weekly roundup (7/18 – 7/25)

Profile: Michael Tidmarsh
Artificial intelligence, CDO, Data privacy, DDOS

Data image via FreeImages

Do you think the chief data officer role is overblown? Find out why the CDO role is in a current state of flux in this week’s roundup.

1. The chief data officer’s dilemma — CDO role in flux – Jack Vaughan (SearchDataManagement)

How to balance data safety with innovative big data expansion was at issue at an MIT symposium where the chief data officer role was considered.

2. Data privacy in the spotlight with Privacy Shield, Microsoft – Trevor Jones (SearchCloudComputing)

Data privacy continues to be a hot-button issue on both sides of the Atlantic, with the Privacy Shield agreement and a big win for Microsoft providing some clarity to this still murky issue.

3. DNS DDoS attack shuts down Library of Congress websites for three days – Michael Heller (SearchSecurity)

A DNS DDoS attack hit the Library of Congress, disrupting various Library services and websites for three days before IT staff was able to restore normal functionality.

4. New AT&T Network on Demand service provides Cisco virtual router, more – Antone Gonsalves (SearchNetworking)

The latest AT&T Network on Demand service provides virtualized versions of Cisco or Juniper routers, Fortinet firewalls or Riverbed WAN optimization technology.

5. The future of AI apps will be delivery as a service – Ed Burns (SearchBusinessAnalytics)

AI systems are generating huge hype right now, which makes it imperative for businesses to understand how the technology can be deployed most effectively.

July 21, 2016  9:13 AM

Surmounting huge hurdles to algorithmic accountability

Profile: Michael Tidmarsh
Algorithms

Algorithm image via FreeImages

By James Kobielus (@jameskobielus)

Algorithms are a bit like insects. Most of the time, we’re content to let them buzz innocuously in our environment, pollinating our garden and generally going about their merry business.

Under most scenarios, algorithms are helpful little critters. Embedded in operational applications, they make decisions, take actions, and deliver results continuously, reliably, and invisibly. But on the odd occasion that an algorithm stings, encroaching on your privacy or perhaps targeting you with a barrage of objectionable solicitations, your first impulse may be to swat back in anger.

That image came to mind as I pondered the new European Union (EU) regulation that was discussed by Cade Metz in this recent Wired article. Due to take effect in 2018, the General Data Protection Regulation prohibits any “automated individual decision-making” that “significantly affects” EU citizens. Specifically, it restricts any algorithmic approach that factors a wide range of personal data—including behavior, location, movements, health, interests, preferences, economic status, and so on—into automated decisions.

Considering how pervasive algorithmic processes are in everybody’s lives, this sort of regulation might encourage more people to retaliate against the occasional nuisance using legal channels. The EU’s regulation requires that impacted individuals have the option to review the specific sequence of steps, variables, and data behind a particular algorithmic decision.
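What would it even take to make that review possible? As a minimal, purely illustrative sketch (the scoring rule, field names, and threshold here are all invented for the example, not any real system), an application could attach a reviewable audit record to every automated decision:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionAudit:
    """One reviewable record per automated decision: the inputs used,
    the model version applied, and every intermediate step."""
    subject_id: str
    model_version: str
    inputs: dict
    steps: list = field(default_factory=list)
    outcome: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def log(self, description, value):
        self.steps.append((description, value))

def score_applicant(audit):
    # Toy rule: weight income against debt, then apply a threshold.
    score = audit.inputs["income"] * 0.4 - audit.inputs["debt"] * 0.6
    audit.log("weighted score", round(score, 2))
    audit.outcome = "approve" if score >= 10 else "refer to human review"
    audit.log("threshold of 10 applied", audit.outcome)
    return audit.outcome

audit = DecisionAudit(subject_id="app-001", model_version="rules-v1",
                      inputs={"income": 50, "debt": 10})
decision = score_applicant(audit)
print(decision)      # approve
print(audit.steps)   # the reviewable sequence of steps behind the outcome
```

For a hand-written rule like this, the audit trail is trivial to produce. The trouble, as we'll see, is that most real algorithmic decisions are nothing like this simple.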

Now that’s definitely a tall order to fill. The regulation’s “right to explanation” requires a degree of algorithmic transparency that could be extremely difficult to ensure under many real-world circumstances. Algorithms’ seeming anonymity—coupled with their daunting size, complexity, and obscurity—presents a thorny problem of accountability. Compounding the opacity is the fact that many algorithms—be they machine learning models, convolutional neural networks, or whatever—are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years.

Throwing more decision scientists at the problem (even if there were enough of these unicorns to go around) wouldn’t necessarily lighten the burden of assessing algorithmic accountability. As the cited article states, “Explaining what goes on inside a neural network is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it’s difficult to determine exactly why they work so well. You can’t easily trace their precise path to a final answer.”
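You can’t easily trace a neural network’s path to an answer, but you can at least probe it from the outside. Here’s a deliberately crude sketch of the idea (the `black_box` function is a made-up stand-in for an opaque model, not any real one): nudge each input and watch how the output moves.

```python
def black_box(x):
    # Stand-in for an opaque model: a nonlinear mix of three inputs.
    return x[0] * x[1] + max(0.0, x[2] - 1.0) ** 2

def sensitivity(model, x, eps=1e-6):
    """Crude local attribution: finite-difference sensitivity of the
    model's output to each input. Real explainability tools are far
    more elaborate, but the basic move -- probing the model from the
    outside -- is the same."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        probe = list(x)
        probe[i] += eps
        scores.append((model(probe) - base) / eps)
    return scores

scores = sensitivity(black_box, [2.0, 3.0, 0.5])
print([round(s, 3) for s in scores])  # [3.0, 2.0, 0.0]
```

Even here, the answer is only local: it tells you what mattered for this one input, at this one point, and says nothing about why the model behaves as it does overall.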

Algorithmic accountability is not for the faint of heart, even among technical professionals who live and breathe this stuff. In many real-world distributed applications, algorithmic decision automation takes place across exceptionally complex environments. These may involve linked algorithmic processes executing on myriad runtime engines, streaming fabrics, database platforms, and middleware fabrics.

For example, this recent article outlines the challenges that Facebook faces in logging, aggregating, correlating, and analyzing all the decision-automation variables relevant to its troubleshooting, e-discovery, and other real-time operational requirements. In Facebook’s case, the limits of algorithmic accountability are clearly evident in the fact that, though it stores low-level messaging traffic in HDFS, this data can only be replayed for “up to a few days.”

Now imagine that decision-automation experts are summoned to replay the entire narrative surrounding a particular algorithmic decision in a court of law, even in environments less complex than Facebook’s. In such circumstances, a well-meaning enterprise may risk serious consequences if a judge rules against its specific approach to algorithmic decision automation. Even if the entire fine-grained algorithmic audit trail somehow materializes, you would need to be a master storyteller to net it out in simple enough terms to satisfy all parties to the proceeding. Most of the people you’re trying to explain this stuff to may not know a machine-learning algorithm from a hole in the ground.

More often than we’d like to believe, there will be no single human expert—or even (irony alert) algorithmic tool—that can frame a specific decision-automation narrative in simple, but not simplistic, English. Check out this post from last year, in which I discuss the challenges of automating the generation of complex decision-automation narratives.

Even if you could replay automated decisions in every fine detail and with perfect narrative clarity, you may still be ill-equipped to assess whether the best algorithmic decision was made. Check out this recent article by Michael Kassner for an excellent discussion of the challenge of independent algorithmic verification.

Given the unfathomable number, speed, and complexity of most algorithmic decisions, very few will, in practice, be submitted for post-mortem third-party reassessment. Only some extraordinary future circumstance—such as a legal proceeding, contractual dispute, or showstopping technical glitch—will compel impacted parties to revisit those automated decisions.

And there may even be fundamental technical constraints that prevent investigators from determining whether a particular algorithm made the best decision. A particular deployed instance of an algorithm may have been unable to consider all relevant factors at decision time due to lack of sufficient short-term, working, and episodic memory. As Facebook’s Yann LeCun stated in this presentation, recurrent neural networks “cannot remember things for very long”—typically holding “thought vector” data structures in memory for no more than 20 seconds during runtime.
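That geometric forgetting is easy to see in a toy model. The sketch below is a caricature, not LeCun’s architecture: a one-unit linear recurrence in which the first input’s influence on the final state decays by a constant factor at every step.

```python
def first_input_influence(seq_len, w_rec=0.5):
    """Toy linear RNN: h_t = w_rec * h_{t-1} + x_t. The first input's
    influence on the final state is w_rec ** (seq_len - 1), which
    shrinks geometrically -- a caricature of why plain recurrent nets
    'cannot remember things for very long'."""
    return w_rec ** (seq_len - 1)

print(first_input_influence(5))   # 0.0625
print(first_input_influence(50))  # ~1.8e-15: effectively forgotten
```

By step 50 the first input’s contribution is below floating-point noise; any evidence it was ever considered has evaporated from the state itself.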

In other words, algorithms, just like you and me, may have limited attention spans and finite memories. Their bias is in-the-moment action. Asking them to retrace their exact decision sequence at some point in the indefinite future is a bit like asking you or me to explain why we used a particular object to swat a particular mosquito nine months ago.


July 18, 2016  8:50 AM

TechTarget’s weekly roundup (7/11 – 7/18)

Profile: Michael Tidmarsh
Authentication, Azure, Cisco, Docker, Storage

Security image via FreeImages

Are you concerned about Google’s OAuth authentication system after the Pokemon GO controversy? Check out how the flaw was fixed in this week’s roundup.

1. Pokemon GO reveals full account access flaw for Google authentication – Peter Loshin (SearchSecurity)

The wildly popular Pokémon GO mobile game obtained a full account access token to iOS users’ Google accounts, revealing a major issue with Google’s OAuth authentication system.

2. CEO Robbins wants more Cisco applications, cloud services – Antone Gonsalves (SearchNetworking)

CEO Robbins said at the Cisco Live conference that customers should expect a much higher percentage of future products to be delivered as Cisco applications or cloud services.

3. Microsoft delays Azure Stack, will sell it only through OEMs – Ed Scannell and Trevor Jones (SearchCloudComputing)

Waiting another six months for Microsoft’s Azure Stack will be a hiccup to some IT shops, but narrowing their hardware choices for it will be far less welcome.

4. Pokemon Go craze: A harbinger of the augmented enterprise? – Francesca Sales (SearchCIO)

Pokemon Go is pushing augmented reality closer to enterprise adoption. Also on Searchlight, Microsoft wins in warrant case; Google hit with new charges.

5. StorageOS gives your Docker volume persistent container storage – Garry Kranz (SearchStorage)

Startup StorageOS launches software-defined storage for containers. Development teams choose a targeted Docker volume in a container and provision storage and data services.


July 11, 2016  9:32 AM

TechTarget’s weekly roundup (7/4 – 7/11)

Profile: Michael Tidmarsh
Antivirus, Brexit, Polycom, Rackspace, Red Hat

Virus image via FreeImages

What do you think about the state of the antivirus market? Check out why Avast’s purchase of AVG could mean trouble in this week’s roundup.

1. Avast purchase of AVG shows uncertainty in antivirus market – Michael Heller (SearchSecurity)

Avast purchased competitor AVG to improve consumer and enterprise products, but one expert said the purchase price proves the antivirus market may be on the decline.

2. Polycom acquisition reverses course with new offer – Katherine Finnell (SearchUnifiedCommunications)

The Polycom-Mitel merger is dead, as Polycom accepts a cash offer from Siris Capital Group. The new deal would make Polycom a private provider of video conferencing services.

3. Rackspace becomes Microsoft cloud distributor, recruits partners – John Moore (SearchCloudProvider)

Rackspace is recruiting channel partners to resell and refer Azure and Office 365 as the company expands its role in Microsoft’s Cloud Solution Provider program.

4. Red Hat reveals vision for more architecture-driven BPM models – George Lawton (SearchSOA)

BPM and DevOps have embraced the same goals from different sides of the organization. Now, Red Hat is trying to close the gap between developers and business process creation.

5. Brexit strategy: CIO Planning for UK-EU split – Jason Sparapani (SearchCIO)

For CIOs, the UK-EU split opens up a continent of unknowns. Also in Searchlight: Cisco locks on to security startup CloudLock; IBM woos blockchain coders; $4 smartphones hit India.


July 5, 2016  9:42 AM

TechTarget’s weekly roundup (6/27 – 7/4)

Profile: Michael Tidmarsh
Brexit, Intel, RHEL, SAP, Video conferencing

Security image via FreeImages

Are you surprised Intel is looking into selling off its security business? Check out some of the possible reasons behind the move in this week’s roundup.

1. Intel reportedly considering selling its security business – Michael Heller (SearchSecurity)

New reports suggest Intel may be looking into selling off its security business, and experts are unclear whether it means Intel’s McAfee acquisition has gone sour.

2. Video conferencing market growth slowed by infrastructure sales – Katherine Finnell (SearchUnifiedCommunications)

In UC news, an industry report predicts slow growth in the video conferencing market, while a study finds SMBs are embracing cloud-based tools faster than larger enterprises.

3. RHEL 8 promises relief from dependency hell, more integration – Meredith Courtemanche (SearchDataCenter)

With RHEL hitting platform maturity, Red Hat’s future includes Microsoft integration, expanded management and a smaller footprint.

4. Buy SAP again? 60% of customers say no, says Nucleus Research – Jim O’Donnell (SearchSAP)

A new report from Nucleus Research says six out of 10 SAP customers would not buy SAP products again, and even in the core ERP market, nine out of 10 won’t consider S/4HANA.

5. Brexit strategy: CIO planning for UK-EU split – Jason Sparapani (SearchCIO)

For CIOs, the UK-EU split opens up a continent of unknowns. Also in Searchlight: Cisco locks on to security startup CloudLock; IBM woos blockchain coders; $4 smartphones hit India.


June 28, 2016  3:35 PM

Accentuating the positive vision of cognitive computing’s potential

Profile: Michael Tidmarsh
Artificial intelligence, Cognitive computing

Cognitive image via FreeImages

By James Kobielus (@jameskobielus)

Popular anxieties have lives of their own. For whatever reason, many people are unsettled by artificial intelligence. Perhaps this is because AI—and more broadly, cognitive computing—has recently been evolving with breathtaking speed from a niche technology to a pervasive force in every aspect of our lives.

To buoy hopes and allay the fears that surround this technology, you need a balanced vision of its future impacts on society. Whatever vision you proclaim should accentuate the positive potential of cognitive computing while highlighting its risks and addressing how those might be mitigated. If you’re one of my regular readers, you know that I’m enthusiastic about the myriad valuable applications of cognitive computing systems such as IBM Watson. Rather than recapitulate my entire bibliography on this topic, I’ll just point you to this InfoWorld column from late last year.

No, I’m not nervous about the risk of cognitive computing running amok. However, I like to think that I’m no Pollyanna. I’m not under the illusion that humanity will always use cognitive computing or any other technology for good. In that regard, check out some of my recent thinking on what you might call the “dark sides” of this technology. Among other discussions, I’ve dissected the potential of cognitive systems to be weaponized, addict emotionally vulnerable people, and inadvertently interpret non-existent visual patterns in blurry images.

I’m particularly leery of unhinged sci-fi-stoked fantasies that have no grounding in the reality of how cognitive computing is being used or is likely to be incorporated into our lives. That’s why I came down hard late last year on the “bogus bogeyman of the brainiac robot overlord,” which referred to the absurd notion that humanity is in danger of being enslaved by superintelligent robots.

What exactly is freaking out the general public regarding the potential downsides of this technology? I’ll summarize the chief anxieties as follows, arranging these in a spectrum from mildly unsettling to thoroughly unhinged:

  • Helplessness: This is the belief that cognitive apps have already entrenched themselves in so many roles in our lives that it’s too late to put the genie and its potential adverse consequences back in the bottle.
  • Bewilderment: This is the feeling that cognitive technologies are so complex and evolving so fast that nobody, not even the top experts, understands how it all works or can rein it all in if it happens to spin out of control.
  • Deprecation: This is the sense that organic human intelligence will no longer seem special as general-purpose cognitive computing becomes agile enough to solve problems in any sphere of knowledge.
  • Usurpation: This is the worry that cognitive-powered programs, capable of natural language communication and empathetic engagement, enable systems to simulate the “human touch” without actual humans in the loop.
  • Unemployment: This is the Luddite-grade anxiety that cognitive systems will automate jobs at every skill level and pay scale, resulting in massive layoffs as careers and industries disappear seemingly overnight.
  • Impersonation: This is science-fiction-fueled panic about the possibility of cognitive systems, aided and abetted by self-fabricating smart materials, spawning a new race of “Terminator”-like humanoids that can pass for human and replicate themselves at will.
  • Annihilation: This is the cartoonish nightmare about cognitive-powered robots conspiring among themselves, or possibly acting under the direction of a criminal human mastermind, to destroy us.

Clearly, there’s a broad ecosystem of popular naysaying that surrounds cognitive computing. However, it would be inappropriate to polarize the climate of opinion into distinct “for” or “against” camps. Many people have mixed thoughts on the topic. Well-informed people know that the technology drives many positive innovations, but they also harbor justified misgivings about its potential abuse.

A perfect case in point is Elon Musk, who has not been shy in voicing his opinions regarding the potential downsides of AI in society. As the founder of Tesla and SpaceX, Musk is obviously passionate about advanced technologies, has a futuristic vision about their transformative potential, and puts ample money where his mouth is. In addition to running his transportation-related businesses, Musk has also recently funded a global research program with the vague goal of “keeping AI beneficial to humanity.”

Getting down to brass tacks, Musk also recently co-founded OpenAI, which describes itself as a “non-profit artificial intelligence research company [that has the goal of advancing] digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The group’s initial projects are focused on the technology’s use in household robots, intelligent personal advisors, and virtual mini-world gaming environments.

We should note that many people who harp on cognitive computing’s downsides also hope to make money from the technology’s incorporation into their companies’ products and services. That’s certainly the case with Musk, and for Bill Gates as well. And there’s nothing wrong with that.

If you’re passionate about cognitive computing, you’ll want to defend it against those who attack it on the basis of ill-informed, unbalanced, speculative, conspiratorial, and dystopic fantasies about its potential for misuse. That, in turn, demands redoubled vigilance at mythbusting the most unwarranted negative beliefs about these technologies.

If you’re on the same page as me on this matter, I urge you to check out this article I published last year that busts several prevailing myths about cognitive computing.


June 27, 2016  9:59 AM

TechTarget’s weekly roundup (6/20 – 6/27)

Profile: Michael Tidmarsh
Dell, EMC, privacy

Privacy image via FreeImages

Where do you stand on the Rule 41 changes? Check out the viewpoints of each side in this week’s roundup.

1. Activists, DOJ spar over Rule 41 changes to enhance FBI searches – Peter Loshin (SearchSecurity)

EFF and privacy activists oppose Rule 41 changes, while the Department of Justice claims the changes do not alter ‘traditional protections’ under the Fourth Amendment.

2. Dell software biz jettisoned to advance EMC buy, users’ views mixed – Ed Scannell and Robert Gates (SearchDataCenter)

Handing off most of its software business to private equity firms is one more step toward Dell’s mega-purchase of EMC, and it gives users both clarity and concerns.

3. Potential PC replacements poised for enterprise prominence – Eddie Lockhart (SearchEnterpriseDesktop)

The PC has long been the top dog in the enterprise, but new, inexpensive devices such as the Raspberry Pi and Google’s Chromebook could threaten its title.

4. Pros, newbies seek new answers to cloud questions at Cloud Expo – Jason Sparapani (SearchCIO)

Consultants and job seekers joined practitioners at the Cloud Expo conference with cloud questions on topics ranging from container storage to the internet of things.

5. End of Dell Cloud Manager shows slow growth in multicloud – Trevor Jones (SearchCloudComputing)

Shuttering Dell Cloud Manager, an early piece of Dell’s move to trusted advisor in public cloud, underscores that the multicloud market has been more hype than reality — at least so far.


June 21, 2016  3:20 PM

Eliciting high-quality data science from non-traditional sources

Profile: Michael Tidmarsh
Data Science, Data scientist


Data image via FreeImages

By James Kobielus (@jameskobielus)

I’m a pragmatist. I like to think that you are what you do. So if you look, walk, and quack like a data scientist, you’re a data scientist, aren’t you?

This is not a metaphysical inquiry. As we encourage more people to acquire data science tools and skills, what point is there in distinguishing between data scientists and those who, for all intents and purposes, are of the same species, albeit without traditional track records, tools, and certifications?

This question occurred to me as I was reading about a new DARPA program called Data-Driven Discovery of Models (D3M). What it’s all about is enabling greater automation throughout the data-science lifecycle. The program recognizes that many of the most critical tasks will be performed by people who are new to this field and who may not fit the traditional profile of the professional data scientist. As stated by the agency, the program’s goal is to “develop algorithms and software to help overcome the data-science expertise gap by facilitating non-experts to construct complex empirical models through automation of large parts of the model-creation process.”

What’s exciting about this initiative is that it focuses on the imperative of multiplying the productivity of data-science teams. It seeks innovative approaches that use automated machine-learning algorithms to accelerate the upfront process of composing data-scientific models that are best suited to a particular analytic challenge. I like the fact that it focuses on giving subject matter experts the tools to specify the analytic challenge to be addressed, identify the data to be analyzed, and evaluate the findings from the machine-learning models that are automatically composed. I think it’s good that they’re building hooks into this environment that would allow established data scientists to evaluate the results of automated methods. And I’m encouraged that the program will also address automation of data-science initiatives that are underspecified in terms of the features to be modeled and the data sets to be analyzed.
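To make the idea concrete, here’s a toy sketch of automated model composition, assuming nothing about D3M’s actual design: the subject matter expert supplies data; the system fits every candidate model, scores each on a holdout split, and hands back the winner. The candidate models and fitting code are invented for illustration.

```python
from statistics import mean

def fit_constant(xs, ys):
    # Simplest candidate: always predict the mean of the targets.
    c = mean(ys)
    return lambda x: c

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b.
    mx, my = mean(xs), mean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def auto_compose(train, holdout, candidates):
    """Fit every candidate on the training split, score each on the
    holdout split, and return the best -- the subject matter expert
    only supplies the data and inspects the winner."""
    xs, ys = zip(*train)
    best = None
    for name, fitter in candidates:
        model = fitter(xs, ys)
        err = sum((model(x) - y) ** 2 for x, y in holdout)
        if best is None or err < best[2]:
            best = (name, model, err)
    return best

train = [(0, 1), (1, 3), (2, 5), (3, 7)]
holdout = [(4, 9), (5, 11)]
name, model, err = auto_compose(train, holdout,
                                [("constant", fit_constant),
                                 ("linear", fit_linear)])
print(name)  # linear
```

Real AutoML systems search vastly larger model spaces with far cleverer strategies, but the division of labor is the same: the human frames the problem and evaluates the result; the machinery composes and compares the models.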

If it realizes its objectives, DARPA’s program will enable everybody everywhere to enjoy the fruits of high-quality data-science tools. However, I take issue with the self-contradictory notion, as expressed by DARPA in its solicitation, that the subject matter experts who would use such a tool are “non-experts.” Fortunately, the agency expresses its intention more cogently at another point in the document when it states its aim of enabling “users with subject matter expertise but no data science background [to] create empirical models of real, complex processes.”

But that statement also suffers from a fundamental conceptual flaw. What DARPA spells out sounds very much like the core competency of an expert data scientist, rather than a “non-expert” dabbler. After all, the core competency of data scientists is the creation and testing of complex empirical models. No matter what their academic or professional background, data scientists specialize in identifying analytic problems to be solved; defining the principal features of that problem that can be statistically modeled; acquiring, evaluating, cleansing, and preparing data sources to be used in the modeling; and building, testing, evaluating, and refining the resultant models.

At another point in the solicitation, DARPA states that one of its program’s core objectives is to develop a framework for “formal definition of modeling problems and curation of automatically constructed models by users who are not data scientists.” But that’s a self-devouring distinction. If someone, of any background, is able to use such a tool to perform this entire lifecycle of data-science tasks, they are thereby a genuine data scientist. They are not merely some incorporeal “virtual data scientist” or robotic “automated data scientist” (to cite two marginalizing phrases that Network World uses in this article about the DARPA program). And they are not necessarily a “citizen data scientist,” in the “impassioned amateur” sense in which many construe that phrase.

When deciding whether a subject matter expert is also a bona fide data scientist, the fact that they performed these data-science functions in a largely tool-automated fashion, rather than through manual techniques, is irrelevant. DARPA’s discussion seems to be hung up on the bogus notion that “curation”—the core of their “non-data scientist” distinction—is something less than full-blooded data science. Essentially, the agency uses this term to refer to two distinct data-science lifecycle tasks: evaluating the relevance of data sources to a specific modeling problem, and assessing the predictive fit of a constructed model to that same problem. However, by anybody’s reckoning, these tasks are at the heart of professional data science. The former is central to data engineering, and the latter to data modeling.
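Both halves of that “curation” work can be made concrete with a small, assumption-laden sketch (the correlation-as-relevance proxy and the toy data are mine, not DARPA’s): data relevance as correlation of a candidate source with the target, and predictive fit as error on held-out examples.

```python
from statistics import mean, pstdev

def relevance(column, target):
    """Data-engineering side of 'curation': Pearson correlation of a
    candidate data source with the quantity being modeled."""
    mc, mt = mean(column), mean(target)
    cov = mean([(c - mc) * (t - mt) for c, t in zip(column, target)])
    return cov / (pstdev(column) * pstdev(target))

def predictive_fit(model, holdout):
    """Data-modeling side of 'curation': mean squared error of a
    fitted model on held-out examples."""
    return mean([(model(x) - y) ** 2 for x, y in holdout])

signal = [1, 2, 3, 4]
noise = [5, 1, 4, 2]
target = [2, 4, 6, 8]
print(round(relevance(signal, target), 3))  # 1.0 -- keep this source
print(round(relevance(noise, target), 3))   # weak -- probably drop it
print(predictive_fit(lambda x: 2 * x, [(5, 10), (6, 12)]))  # 0.0
```

Anyone who can do these two things well, by whatever tooling, is doing the core work of data engineering and data modeling; calling them a “non-data scientist” is a distinction without a difference.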

But I should point out that, for all its scoping flaws, DARPA’s initiative is on the right track. Modeling automation initiatives such as this are driving the new era of democratized data science. If subject matter experts everywhere embrace self-service tools for high-quality data science, we will unlock a world of data-driven creativity and innovation.


June 20, 2016  3:08 PM

TechTarget’s weekly roundup (6/13 – 6/20)

Profile: Michael Tidmarsh
Docker, HPE, Linkedin, Machine learning, Microsoft

Social networking image via FreeImages

What can we expect from the Microsoft-LinkedIn deal? Check out all the details on the latest acquisition in this week’s roundup.

1. Microsoft-LinkedIn deal to shake up enterprise social networking – Brian Holak (SearchCIO)

The Microsoft-LinkedIn deal gives us a glimpse into a hyper-social business future. Will employees like what they see? Also in Searchlight: federal court upholds FCC net neutrality rules; big announcements from Apple’s WWDC.

2. Price isn’t everything: Google bets big on machine learning – Trevor Jones (SearchCloudComputing)

Google sees machine learning and deep analytics as the future of the cloud, as it seeks a strategy to stand out from the crowd beyond price.

3. HPE turns to Docker Engine to fuel server sales – Robert Gates (SearchDataCenter)

HPE has found a data center friend in Docker to ease container entry into the enterprise, but right now, it’s a step too far for some IT shops.

4. June Patch Tuesday addresses DNS, SMB Server vulnerabilities – Tayla Holman (SearchWindowsServer)

June’s batch of security bulletins included a number of updates to close Windows Server vulnerabilities, including a remote code execution flaw in DNS server.

5. LinkedIn could get UC features following Microsoft acquisition – Tracee Herbaugh (SearchUnifiedCommunications)

Analysts believe Microsoft will integrate UC features from Skype for Business into LinkedIn, opening up communications between users of the professional social network.


June 6, 2016  10:26 AM

TechTarget’s weekly roundup (5/30 – 6/6)

Profile: Michael Tidmarsh
Agile, DevOps, Mitel, Polycom, VMware NSX

Deal image via FreeImages

Do you think the Mitel-Polycom deal will go through? Check out the latest details in this week’s roundup.

1. Tech buyers watch fate of Mitel-Polycom deal after second offer made – Tracee Herbaugh (SearchUnifiedCommunications)

Tech buyers face benefits and drawbacks if a New York-based private equity group outbids Mitel for the video conferencing company Polycom.

2. Microsoft warns of rare ransomware worm – Michael Heller (SearchSecurity)

Microsoft warned users of a rare ransomware worm affecting older versions of Windows, but experts are wary of the recommended mitigation technique.

3. Users give thumbs-up to lower-end versions of VMware’s NSX – Ed Scannell (SearchServerVirtualization)

VMware looks to finally establish a foothold in corporate accounts with two low-end versions of NSX. But will the enterprise take the bait?

4. At Cloud Expo, get the latest thoughts on DevOps and Agile – Valerie Silverthorne (SearchSoftwareQuality)

Everyone wants to be Agile and do DevOps, but, of course, it’s harder than it seems. Find out what industry experts will be talking about at Cloud Expo.

5. Falling into the tech skills gap? Try a new recruiting tack – Jason Sparapani (SearchCIO)

Organizations forging into the digital future often come up short in a search for talent. But there are novel ways to close the tech skills gap, say execs at the MIT Sloan CIO Symposium.

