Netskope recently obtained a second cloud security patent for its CASB platform, one that could prove extremely beneficial in an increasingly competitive cloud security market that puts a premium on intellectual property.
The CASB startup was awarded its first patent earlier this year for technology that “steers” enterprise traffic to cloud applications and provides real-time visibility into those apps.
The new patent covers Netskope’s technology that provides granular data governance and security policy enforcement on cloud applications. Netskope CEO Sanjay Beri called it a “broad patent” that covers the ability to set policies for cloud app usage based on a number of variables, including device type, user profile, behavioral analytics and, perhaps most importantly, what data is being accessed in that cloud app and what is being done with it.
“This is the other side to the approach,” Beri said. “The first patent was about steering traffic to control points. That’s how you get the traffic to the cloud services. The second patent is about what you do with the traffic when it gets there.”
Netskope’s policies, which customers themselves set, are able to distinguish between downloading, modifying and viewing data within cloud apps. For example, Beri said a retail customer concerned about cloud data being accessed by unmanaged personal devices could create a policy for an app’s data that allows users to view the data but not download it.
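The distinction Beri describes, allowing one activity on a piece of data while blocking another, can be sketched as a simple rule table. The snippet below is a hypothetical, heavily simplified illustration of activity-level policy evaluation, not Netskope's actual engine; the app and device names are invented:

```python
# Hypothetical rule table: each rule keys on (app, device type, activity).
# Invented names for illustration only.
POLICIES = [
    {"app": "crm", "device": "unmanaged", "activity": "view", "allow": True},
    {"app": "crm", "device": "unmanaged", "activity": "download", "allow": False},
]

def is_allowed(app: str, device: str, activity: str) -> bool:
    """Return the first matching rule's verdict; default-deny otherwise."""
    for rule in POLICIES:
        if (rule["app"], rule["device"], rule["activity"]) == (app, device, activity):
            return rule["allow"]
    return False  # no explicit rule: deny

print(is_allowed("crm", "unmanaged", "view"))      # True
print(is_allowed("crm", "unmanaged", "download"))  # False
```

A real CASB evaluates far richer context (user profile, behavioral analytics, data classification), but the default-deny rule lookup captures the basic shape of view-versus-download enforcement.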
Beri said the power of the new patent is the technology’s ability to give enterprises the proper context in how their cloud apps can and will be used. Most web application firewalls use basic “block or allow” policies, he said, but CASBs like Netskope can provide more insight and control around how a cloud app and the corporate data tied to it are being used.
Technology patents don’t usually make for exciting news, but in the case of the CASB market, they provide insight into how startups have evolved – and how they’re being valued by investors and suitors. CASBs have evolved from offering basic visibility into shadow cloud usage to providing secure gateways to cloud apps as well as access control, data protection and threat monitoring. As these CASBs have matured, they’ve patented their unique approaches and developed intellectual property (IP) around those sets of services.
And those patents have proven to be valuable – perhaps more so than customer base or revenue – to potential suitors. Blue Coat Systems’ recent S-1 filing for a potential IPO revealed that Perspecsys, a CASB startup acquired by Blue Coat last summer for $45.5 million, had posted revenue of just $2 million for the first half of 2015. But Michael Fey, then Blue Coat’s president and COO, said the Perspecsys deal was about obtaining the startup’s patent for tokenization technology in the cloud.
Similarly, Blue Coat acquired another CASB last November in Elastica, which, according to Blue Coat’s S-1, generated just $395,000 in revenue between January and November of 2015. Yet Elastica was able to command a purchase price of $280 million, thanks in large part to its IP and patent-pending tech.
Beri said Netskope recognizes how valuable IP and patents are in the security industry, which is why the company is doubling down on research and development. “We have the largest R&D team in the CASB market,” he said. “So you’ll see more patents for us in the future.”
Outside of command line tutorials for Linux, the term “environment variable” increasingly appears right next to “security vulnerability.” Consider Shellshock — one of the worst exploitable flaws ever — which requires little more than attaching malicious code to an environment variable. More recently, the httpoxy vulnerability showed how attackers can hijack an application’s outbound traffic through the HTTP_PROXY environment variable.
Are environment variables for suckers? Do we even need them anymore? Can we afford them?
SearchSecurity asked several experts whether it might be time to ditch environment variables, given that they enable vulnerabilities like Shellshock and httpoxy, or whether there are benefits to keeping them on hand.
“Environment variables are an essential part of how things run under unix/linux systems,” explained John Bambenek, manager of threat systems at Fidelis Cybersecurity in Waltham, Mass. Many environment variables are innocuous — for example, the PATH environment variable lists the directories in which the shell looks for binaries when a command is entered at the command line.
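As a concrete illustration of that lookup, the snippet below (a minimal sketch using Python's standard library) splits PATH into its ordered search directories and resolves a command name much the same way the shell does:

```python
import os
import shutil

# PATH is an ordered list of directories (colon-separated on Unix-like
# systems); the shell runs the first matching executable it finds.
search_dirs = os.environ.get("PATH", "").split(os.pathsep)
print(search_dirs[:3])  # the first few directories searched

# shutil.which performs the equivalent lookup from Python:
print(shutil.which("sh"))  # e.g. /bin/sh on most Unix-like systems
```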
However, Bambenek said, “The problem is allowing the open internet to modify environment variables of significance — like HTTP_PROXY — that have real impact on those running applications. Accepting unauthenticated input from the world is always a very dangerous thing, reading that data into an environment variable that has real impact on the system is extremely dangerous,” and that’s what happened with the httpoxy vulnerability.
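The mechanics behind httpoxy are easy to sketch. The CGI specification (RFC 3875) has servers copy each incoming request header into an environment variable prefixed with HTTP_, so a client-supplied Proxy header lands in HTTP_PROXY, the same variable many HTTP client libraries consult for outbound proxy settings. The helper below is a simplified illustration of that mapping, not any particular server's code:

```python
def cgi_env_from_headers(headers: dict) -> dict:
    """Simplified CGI rule: each request header becomes an HTTP_-prefixed
    environment variable (name uppercased, dashes turned to underscores)."""
    return {
        "HTTP_" + name.upper().replace("-", "_"): value
        for name, value in headers.items()
    }

# A client-controlled "Proxy" header collides with HTTP_PROXY, which many
# HTTP client libraries read as their outbound proxy configuration:
env = cgi_env_from_headers({
    "Host": "example.com",
    "Proxy": "http://attacker.example:8080",
})
print(env["HTTP_PROXY"])  # http://attacker.example:8080
```

If the application then makes an HTTP request with a library that honors HTTP_PROXY, its outbound traffic is routed through the attacker's proxy, which is exactly the httpoxy scenario Bambenek describes.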
“I’m not sure how we do things without environment variables,” said Jacob Williams, founder of consulting firm Rendition InfoSec LLC, in Augusta, Ga. “They are a source of vulnerabilities, but not having them creates a whole new class of problems we’ll have to account for in the long run. I don’t know what the solution will be, but it will also create new vulnerabilities. It’s not the variables themselves, it’s the insecure use of the variables that creates problems.”
Deciding whether it’s time to stop using environment variables depends on where they are used, according to Bill Berutti, president of performance and analytics and of cloud management/data center automation at BMC, a business service management software firm based in Houston.
“For an enterprise application, it is always a good practice to pass on variables for the session of the process and not set in the environmental variables. This is a much better approach,” Berutti explained. “Nevertheless, environmental variables are useful in case of test/stage applications where there are a lot of clone applications being run on the same box to test out applications in parallel and/or it’s something standardized for all the applications running on that node.”
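Berutti's preferred approach, handing configuration to the specific process that needs it rather than exporting it machine-wide, can be sketched as follows; APP_DB_URL is a hypothetical setting used only for illustration:

```python
import os
import subprocess
import sys

# Pass the setting only in the child process's environment, rather than
# exporting it shell-wide where every process on the box can see it.
child_env = {**os.environ, "APP_DB_URL": "postgres://localhost/test"}

result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['APP_DB_URL'])"],
    env=child_env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())       # the child saw the variable
print("APP_DB_URL" in os.environ)  # the parent never exported it
```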
“There is nothing inherently wrong with environmental variables,” said Christopher Robinson, a manager in Red Hat’s product security program. Cloud services, for example, often use environment variables to distribute configuration data, though Robinson warned that “programmers should always be cautious as to what data their programs accept and use for subsequent processing/directives.”
Using environment variables is neither more nor less secure than the alternatives, according to Lane Thames, security researcher on Tripwire’s Vulnerability and Exposure Research Team (VERT). “Regardless of where the data comes from (environment variable, database query, et cetera), it is up to the application that uses the data in the variables to ensure correctness and compliance.”
“I don’t know that we can [get rid of them] without boiling the ocean,” said Dominic Scheirlinck, principal engineer at Vend, an e-commerce firm in Auckland, New Zealand. “They’re in every new [platform as a service] and [continuous integration] system because they work well for simple, easy-to-use configuration.”
Scheirlinck, who also led the httpoxy disclosure team, added, “I think it’s more likely that we should be much more careful, in the future, about accepting specs,” like the common gateway interface (CGI) specification, “that allow environment variables to be controlled by remote users.”
It’s becoming harder and harder for me to read about the glaring security holes, the bafflingly risky behaviors and the all-around worst practices of the healthcare industry and still maintain some semblance of composure and mental health. It feels as if each new study or research paper on healthcare security is pushing me closer to the point where I’ll be more terrified of seeing doctors using an outdated Windows XP client than watching them approach me with a scalpel or six-inch hypodermic needle. Or both.
Case in point: a study from researchers at Dartmouth College, the University of Pennsylvania and the University of Southern California has garnered media attention recently, as it shows the lengths to which doctors, nurses and clinicians will go to bypass security controls and authentication measures, which many view as impediments to their jobs. “Workarounds to computer access in healthcare are sufficiently common that they often go unnoticed,” the study reads. “Clinicians focus on patient care, not cybersecurity.”
The title of the paper – “Workarounds to Computer Access in Healthcare Organizations: You Want My Password or a Dead Patient?” – offers a big hint about the mentality of medical professionals when it comes to the topic of information security. The study, in which researchers interviewed and observed hundreds of medical workers at different hospitals and medical centers, showed how doctors, nurses and other medical professionals are so desperate to avoid cumbersome login and authentication processes that they will resort to almost absurd practices to get around them.
But in case the title and summary aren’t enough, here are some points from the study:
- “Clinicians share passwords with others so that they can read the same patients’ charts even though they might have access in common. A misbehaving hospital technician used a physician’s PIN code to create fake reports for patients.”
- The study documented “physicians ordering medications for the wrong patient because a computer was left on and the doctors didn’t realize it was open for a different patient.”
- Nurses would circumvent the need to log out of COWs (computers on wheels) by placing “sweaters or large signs with their names on them” on the machines, by hiding them, or by simply lowering laptop screens.
- The study cited a previous report about how “clever clinicians at one hospital defeated proximity sensor-based timeouts by putting Styrofoam cups over the detectors, and how (at another hospital) the most junior person on a medical team is expected to keep pressing the space bar on everyone’s keyboard to prevent timeouts.”
- One vendor even distributed stickers urging users to “write your username and password and post on your computer monitor.” A newspaper also found that a discarded computer from a medical practice contained a Word document of the employees’ passwords — conveniently linked from a desktop icon.
Medical professionals, however, don’t bear the full blame for these terrible healthcare security practices. The study points out how some of the inadequate IT systems used in hospitals can promote this kind of delinquency. For example, the paper cites a physician who uses the clinic’s dictation system, which times out sessions after five minutes and then requires roughly a minute to re-authenticate; the physician told the researchers he spent nearly an hour and a half logging in over the course of a single day.
Another example involved a large city hospital, which required a digital thumbprint to authenticate each death certificate. But only one of the doctors on staff had thumbs that could be read by the digital reader. “Consequently, only that one doctor signs all of the death certificates, no matter whose patient the deceased was,” the report stated.
Still, however dysfunctional the technology and legacy systems at hospitals are, these access and authentication workarounds are just more examples of how easy the healthcare industry makes it for attackers to breach its networks. Attacks on hospitals and healthcare organizations are on the rise, and we’ve seen repeated examples of how vulnerable hospitals are to such attacks. It’s time for the healthcare industry to address these counterproductive behaviors and woeful technology before the medical records and personally identifiable information of every U.S. citizen become public domain — if they aren’t already.
Symantec’s surprise announcement this week that it had agreed to acquire Blue Coat Systems for a whopping $4.65 billion in cash led to much discussion about how the purchase will affect the beleaguered antivirus giant, which has experienced well-documented struggles and setbacks in recent years. But there’s been much less focus on Blue Coat — how the company arrived at this point, and how much its investments in cloud security, specifically the cloud access security broker space, have benefited Blue Coat.
First, let’s go back in time and look at some numbers. During Blue Coat’s last full fiscal year as a public company (ended April 30, 2011), the company posted $487 million in revenue. In the announcement of its FY 2011 financials, then-CEO Mike Borman didn’t mince words about the disappointment over those results. “I am not satisfied with our revenue performance, as we did not deliver the top-line results that I believe we are capable of,” he said.
Later that year, private equity firm Thoma Bravo acquired Blue Coat for $1.3 billion. At that point, Blue Coat’s focus was on the WAN optimization market (Borman highlighted the company’s MACH5 technology and PacketShaper products as bright spots in the otherwise lackluster FY 2011 results) rather than cloud security. Fast forward to March of 2015, and Thoma Bravo sold Blue Coat to Bain Capital for $2.4 billion (Thoma Bravo reportedly had talks to sell the company to Raytheon the previous year).
Fast forward again to this month, and Bain sold Blue Coat for almost twice what it had paid for the company about a year earlier. So what changed over those 15 months? Blue Coat has obviously strengthened its presence in the web security gateway market in recent years (the company is routinely listed as a market leader by analyst firms such as Gartner and IDC). But the vendor also made some aggressive acquisitions of its own last year in the CASB market, starting with the purchase of Perspecsys, a startup based in McLean, Va., in July (terms of the deal were not disclosed at the time). And just a few months later, Blue Coat acquired another CASB startup, paying $280 million for San Jose-based Elastica. And Blue Coat isn’t the only company getting in on the CASB market, as Microsoft bought Adallom for a reported $320 million.
Why did Blue Coat acquire two CASBs? As Blue Coat COO Michael Fey explained to SearchCloudSecurity last year, Elastica and Perspecsys had distinctly different approaches to the CASB model. Perspecsys offered patented tokenization technology to protect corporate data moving to and from cloud applications, while Elastica’s platform concentrates on SaaS monitoring and usage analysis. Blue Coat began integrating both CASB offerings with its web security gateway products under the company’s “Cloud Generation Gateway” strategy.
Both acquisitions appear to have benefited the company, which is now seen as a leader in the CASB space. Several analysts have applauded the move as a way to bolster Symantec’s presence in the cloud security market. Barron’s, for example, noted that while 80% of Blue Coat’s revenue comes from the slower-growing web security gateway market, the remaining 20% comes from the “more promising” CASB space.
It’s impossible to determine how much value the CASB deals have added to Blue Coat, as the company is privately held. But clearly Blue Coat has become a hotter commodity than it was when the company changed owners last year; it’s worth noting that less than two weeks before Symantec paid $4.65 billion for the company, Blue Coat announced plans for an IPO (according to Business Insider, the company filed its IPO plans under the JOBS Act, which allows smaller businesses to file for IPOs privately with the SEC). But Symantec — apparently sensing some urgency — snatched up Blue Coat before the IPO could take place.
Time will tell if the acquisition turns out to be a success, let alone the transformational move Symantec needs to move beyond its legacy antivirus business. For now, however, it looks like the CASB acquisitions were lucrative investments for Blue Coat, and I’d expect interest in the market from IT vendors and enterprises alike to continue to grow.
Last week, Google showed off a new messaging app called Allo. The reaction to that announcement was either extremely positive or negative, depending on who was speaking. General consumers liked the product because it built Google smarts into a messaging app, while privacy proponents decried the fact that end-to-end encryption was not a default feature of the app.
Edward Snowden even weighed in on the matter:
Google’s decision to disable end-to-end encryption by default in its new #Allo chat app is dangerous, and makes it unsafe. Avoid it for now.
— Edward Snowden (@Snowden) May 19, 2016
Yes, the way that Allo is designed does leave a small point of access for a court order — Google servers can read messages in order to offer smart replies and contextual search data before immediately deleting the message. But Snowden’s assertion that this somehow makes the app “dangerous” and “unsafe” is hyperbolic at best, and at worst it makes it clear that Snowden has forgotten that not everyone on Earth is a fugitive from the law.
The choice doesn’t need to be a strict binary of safe/unsafe depending on if encryption is the default, because if that becomes true there’s no way to evolve messaging services. Google is in a unique position where the company is pushing artificial intelligence and machine learning, features that simply don’t work without access to data. Google may only want to add search results and suggestions to chat, and enterprise security relies on AI and machine learning for behavioral analytics and advanced malware detection. These features cannot exist in a world where encryption is the default.
Aside from that, the idea that a lack of encryption is the same as a lack of security ignores the fact that encryption was never designed to be the default. The aim of encryption was always to protect sensitive data, not to protect every word communicated between two parties. In this vein, Allo is the true expression of encryption — when you’re talking about restaurants, you can get Google suggestions because the chat is unencrypted, but when you’re talking about something sensitive (the definition of which is personal to everyone), you can switch to Incognito mode in order to be “safe” (as Snowden defines it) from the government’s prying eyes.
The aim of the encryption debate should be to make users aware of how to protect themselves and the ways that security is vulnerable either to legal orders or hackers. Pushing the idea that encryption is the only form of safety is both antithetical to how the technology is supposed to work and a gross simplification of what users want and need from that technology.
According to market forecasts, more companies are investing in cybersecurity and that spending is likely to increase dramatically in the next few years.
MarketsandMarkets forecasts that the worldwide cybersecurity market will reach $170.21 billion by 2020, up from $106.32 billion in 2015. This outlook includes both technologies and services, such as those offered by managed security service providers. North America is expected to see the largest cybersecurity spending and adoption, followed by “significant growth” in Latin America and Asia Pacific, according to researchers.
In the United States, President Obama put forth a Cybersecurity National Action Plan (CNAP) in February 2016 that, if approved, would allot $19 billion to cybersecurity across the federal government (and private sector) as part of the fiscal year 2017 budget — a 35% increase over the FY 2016 budget. The Office of Personnel Management breach discovered in April 2015, which exposed the personally identifiable information of federal employees (and interviewees), may have added to the sense of urgency. A $3.1 billion Information Technology Modernization Fund aimed at updating government technology and cybersecurity efforts is also part of CNAP.
It sounds like a lot of money, but you might still wonder whether cybersecurity spending is too low. That’s often the case in the private sector, despite analysts’ rosy forecasts. Many companies don’t spend money on cybersecurity until there’s a security incident and the curtains are pulled back. According to several published reports, the Bangladesh central bank, which hackers breached in February to the tune of $81 million, was using “second hand, $10 switches” and lacked firewalls on the local networks that interfaced with the SWIFT financial messaging platform, law enforcement officials said.
Organizations may not invest in cybersecurity because the returns (savings) do not directly influence the bottom line. Is the new technology or service worth it? Configured and tuned correctly? Monitored by skilled staff?
Centralized security management may also play a part. As Adam Rice, global CISO at Cubic, noted in his article, “Can cybersecurity spending protect the U.S. government?” increasing technology investments isn’t the only answer. CISOs need to be put in place and given the resources and support to effectively do their jobs.
For some companies, “rent-a-CISO” programs (offered by IBM, among others) may provide help building security programs — and prioritizing cybersecurity investments. Board-level cybersecurity discussions and handwringing may not increase spending until a security incident forces greater investment, however. And even then, the return on investment is a bit of a crapshoot.
As Sen. Angus King, an independent from Maine, said recently on Bloomberg’s “Political Capital with Al Hunt” when asked about former Secretary of State Hillary Clinton’s email controversy, “The irony is that the State Department’s servers were hacked and hers wasn’t.”
During the legal battle between Apple and the FBI over gaining access to an iPhone used by one of the San Bernardino shooters in December’s terrorist attack, an unexpected development thrust enterprise mobility management (EMM) software in general — and EMM vendor MobileIron specifically — into the limelight in one of the biggest technology controversies in recent years.
Earlier this year, Reuters reported that the San Bernardino county government had deployed MobileIron’s EMM software on many of the mobile devices used by county employees — but that former employee and San Bernardino shooter Syed Rizwan Farook was not among them. Had the EMM software been installed on the government-owned iPhone assigned to Farook, the county could have remotely unlocked the device and gained access to it.
Why wasn’t MobileIron’s EMM software on Farook’s iPhone? According to a San Bernardino County spokesperson who spoke to The Wall Street Journal, Farook, a restaurant health inspector, wasn’t the type of employee who had access to sensitive government data, and therefore the county determined that MobileIron’s EMM app wasn’t needed on that device.
I recently spoke with Ojas Rege, MobileIron’s vice president of strategy, who talked about the San Bernardino county government’s decision and the iPhone controversy in general.
“The county looked at [Farook’s] device at the time and decided since there wasn’t going to be any proprietary information or sensitive data on it, then it didn’t need the EMM software,” Rege said. “They decided they didn’t need to secure it, and instead they secured other devices.”
On the surface, that decision probably made perfect sense at the time — organizations focus their mobile security policies primarily around protecting the data and applications on the device. And if there are none on the device, then it doesn’t need EMM or mobile device management (MDM) software, the precursor to EMM.
But this approach doesn’t account for how an employee can abuse the device or misuse it for malicious purposes. “You wouldn’t give a new entry-level hire a laptop without any security software on it or the ability for IT to access it,” Rege said.
And in terms of Farook’s iPhone, the issue becomes thornier; the county required that employees, including Farook, use a four-digit passcode to protect government-owned devices, and it set all phones to be wiped after 10 failed passcode attempts (this setting prevented law enforcement officials from accessing the device). Clearly the San Bernardino government was concerned about Farook’s iPhone potentially falling into the wrong hands, despite feeling the data on the device was not worth protecting — it just didn’t consider the wrong hands would belong to Farook.
That is a major oversight for enterprises and governments alike, according to Rege; organizations need to consider more than just the data and applications on the device and prepare for how the device itself may be misused (for example, an employee could download fake or malicious apps that could spread malware to other enterprise devices or systems). And Rege has some data to back up his case for having EMM software on virtually every enterprise device.
The MobileIron Security Labs (MISL) division earlier this year released its first quarterly Mobile Security and Risk Review report, covering the fourth quarter of 2015, which included research culled from MobileIron customers. The report showed that 50% of enterprises surveyed had at least one device that was non-compliant with the company’s mobile security policies at any given time; typical reasons for non-compliance, according to the report, were:
- Missing, lost or stolen devices (33% of MobileIron customers)
- Employees removing passcode/PIN protection (22%)
- Employees removing MDM apps (5%)
These types of non-compliance don’t necessarily mean an enterprise employee is using his or her device for malicious purposes. And to be sure, the chances that an enterprise will find itself in the same position as San Bernardino County – struggling to unlock a company iPhone that could have been used in a terrorist attack committed by one of its employees – are probably very low.
But there are other risks and threats for enterprises to consider. And given how powerful mobile devices have become, and how the devices could be used for malicious purposes, Rege argued that enterprises should consider installing EMM software on every device, regardless of what information or applications are actually on the device or what type of employee is using it. Without it, an enterprise can’t gain visibility into suspicious user activity or prevent an employee from jailbreaking a device and disabling its passcode protections.
“The way you secure the desktop is going to be the way you secure mobile devices,” Rege said. “Mobile [adoption] sneaks up on people. Before you know it, you have 1,000 iPhones.”
It’s unclear what effect the San Bernardino case will have on how enterprises view EMM and mobile security. But in the context of Farook’s iPhone and San Bernardino County’s decision not to install EMM on it, the MISL quarterly report ends with an eerily prescient note:
“For most enterprises, mobile security strategies are still maturing. Analytics based on the prevalence of identifiable vulnerabilities in mobile devices, apps, networks, and user behavior are key to developing better tactics and tools to reduce the impact of these vulnerabilities,” the report states. “Enterprises with an EMM solution in place generally have many of the tools they need; they just need to activate them.”
Branding a security threat with a catchy nickname isn’t new, but the practice has evolved over time. Nicknames used to go to worms or viruses (Melissa, Code Red, etc.), and most were named by those who created the code itself, like the Conficker worm or Blaster, a worm packaged in a file named MSBlast.exe.
More recently, the trend has been to brand vulnerabilities with punchy marketing names like Heartbleed, VENOM and Badlock, and to give them logos, too. These newer naming efforts began as a way to make the issues easier to understand. For example, Shellshock covered a number of vulnerabilities that affected the Bash shell, and Heartbleed related to the TLS heartbeat extension.
At first, this practice was praised because it made it easier for the general public to understand a problem and arguably led to higher rates of remediation. The idea was that executives who didn’t know much about security would take an interest in patching a named flaw, raising patch rates, and that branding made reporting on vulnerabilities easier. Even this benefit has come under scrutiny, though, given the number of servers still vulnerable to Heartbleed.
Unfortunately, there has never been much consistency to the practice and it has begun to feel as though branding a vulnerability is marketing for the researcher (team or individuals) behind the disclosure rather than making it easier to talk about the flaw.
Some branded vulnerabilities have been legitimate security risks (Heartbleed and Shellshock); others never saw measurable exploitation in the wild even after proof-of-concept exploits were created (VENOM, Stagefright, GHOST and Rowhammer); and beyond both of those groups are the vulnerabilities that posed serious security risks but never received branding.
The exclusion of that last group makes sense, partly because if anyone tried naming every Flash vulnerability packed into an exploit kit, they would run out of words before running out of issues, but also because, as Red Hat succinctly put it in a Venn diagram — the overlap between branded vulnerabilities and security issues that matter is not that big.
It may be easier to rally behind a threat with a name, but that doesn’t make it the most dangerous and only serves to muddy the water. And in the extreme, a vulnerability like Badlock is branded weeks before it is disclosed, breeding fear with no option for mitigation and giving criminals time to find and exploit the flaw.
Ultimately, if branding doesn’t have a clear purpose beyond marketing the research team that discloses the vulnerability, it could create more issues. At the very least, IT departments waste time and resources on lower-priority flaws; at worst, enterprises are left at risk by putting resources into the wrong fixes.
This is our own fault.
That was my first thought when I read the news last week that U.S. Magistrate Judge Sheri Pym had ordered Apple to assist the FBI in bypassing the security measures on a locked iPhone that belonged to one of the deceased San Bernardino shooters.
And when I say “our own fault,” I mean the technology industry, and specifically the information security sector. Because too many people were asleep at the wheel while all the encryption backdoor talk and “going dark” nonsense was being thrown about on Capitol Hill and the campaign trails. And now the encryption debate has not only been taken to a higher level, but it’s also been pushed in a perilous direction for the tech industry.
Most security experts seem to agree that forcing Apple to write a custom software tool that will bypass the iPhone passcode lock and/or disable the auto-wipe feature for failed login attempts is a bad idea, if for no other reason than that such a tool could fall into the wrong hands and undermine the security of every iOS device in the world (to say nothing of the potential abuses of even the most well-meaning law enforcement agents). But now experts and tech vendors are scrambling to communicate those concerns (and many others about Judge Pym’s order) and are effectively playing catch up to the government’s campaign to undermine strong encryption, which has been rolling in recent months.
While I don’t think any amount of pro-encryption pushback from the tech community was going to prevent Judge Pym from issuing this order, such efforts would have at least set the stage for strong opposition against government-mandated backdoors and sent a message to lawmakers and politicians. Remember, this is the same community and industry that effectively shut down the Stop Online Piracy Act (SOPA) in 2012 following large-scale Internet blackout protests. The ability to influence public policy was there; we just didn’t use it.
And we missed or outright disregarded the numerous warning signs that this was coming. While the Obama Administration and FBI Director James Comey said they would not be seeking legislative remedies to the “going dark” problem, Comey made numerous speeches (four in the month of October alone) before Congress and the public about the dangers of encryption (while pro-encryption testimony from tech experts has largely been absent). Meanwhile, politicians and government officials were doing everything they could to blame tragedies like the Paris terrorist attack on encrypted communications while publicly stating their opposition to strong encryption.
I’m not sure why the tech community was so complacent about this. But during a dinner with media members back in December, RSA President Amit Yoran spent the better part of an hour discussing the issues around encryption and “going dark,” and he said something very telling at the time. Just a few days earlier, Sen. Dianne Feinstein (D-Calif.) had said she would lead an effort (after yet another instance of Congressional testimony on encryption from Comey) to “pierce” encryption and compel technology manufacturers to decrypt any and all data at the request of law enforcement.
“This is quite possibly one of the most absurd public policy proposals in recent decades. It just shows a complete lack of understanding as to how technology works,” Yoran said. “I can’t imagine anyone [in the private sector] is going to support that.”
Fine, I said — that’s the private sector. But I argued that if you step back from the tech industry, you’d be surprised at how much public support there is to break encryption and give law enforcement access to all data. A recent poll about the Apple court order supports that argument.
To use an infosec analogy, the industry saw an impending threat and incorrectly assessed the risk before it was too late.
And that brings us to RSA Conference 2016. The world’s largest information security event begins next week, with arguably the most important tech policy issue of our time looming over it: the government’s intent to force technology companies to break their own products and fundamentally undermine security. We can go in one of two directions at RSA Conference. The leading infosec voices and tech leaders can continue to offer tepid support for Apple and try to shrug off the government’s anti-encryption efforts, or they can finally and collectively take a stand and start working to reverse the tide of public opinion on encryption, or at the very least educate the public on the matter.
I’m not optimistic that the industry will move in the latter direction at RSA Conference next week. I think most companies have been secretly content to have Apple, the world’s largest and most popular technology company, take the lead on this issue and allow them to avoid the potential bad press. And I’m not sure how much has changed in recent days.
But I do know we can’t afford to let Tim Cook stand out on an island alone for this fight.
It wasn’t that long ago that endpoint security was viewed as an afterthought (and some might argue that for a lot of folks, it still is). As enterprises and security managers scrambled to shore up the perimeter defenses and protect the corporate network, it felt like attending to the security needs of client devices fell further down the priority list until some punted on it entirely.
But with the rise of mobile devices and BYOD, not to mention growing adoption of cloud applications and SaaS offerings, the importance of endpoint security is coming back into focus. And that’s a good thing for Morphisec, an Israeli security startup that specializes in what’s known as “moving target defense.”
Morphisec CEO Ronen Yehoshua said his company uses “a new kind of prevention technology” that uses polymorphism to confuse would-be attackers. In other words, Morphisec’s Endpoint Protector technology disguises the true nature of a device, making it appear different from what it actually is; the product randomly changes information about a device and its applications — without modifying the underlying structure of the OS or applications — to confuse hackers and cybercriminals.
As a result, Yehoshua said, attackers will spin their wheels devising malware for a fictitious device profile only to find the malicious code they developed doesn’t work on the target. There are other features of Endpoint Protector, such as “contextual forensics” for increased visibility of attacks, but the moving target defense is the big differentiator.
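To make the moving-target idea concrete, here is a minimal toy sketch of the general concept: an attacker fingerprints a target and tailors a payload to that profile, but the defense re-randomizes the externally visible profile, so the payload’s assumptions no longer hold. All names and profile fields below are invented for illustration; this is not Morphisec’s actual product or API, just a conceptual model of randomizing what an attacker can observe.

```python
import random

# Hypothetical profile fields an attacker might fingerprint. These values
# are made up for illustration only.
FIELDS = {
    "os_build": ["10586.104", "10240.16725", "14257.1000"],
    "browser": ["IE 11.0.28", "Chrome 48.0", "Firefox 44.0"],
    "plugin_abi": ["abi-a", "abi-b", "abi-c"],
}

def visible_profile(rng):
    """Return the randomized profile an outside observer would see."""
    return {field: rng.choice(options) for field, options in FIELDS.items()}

def remorph(current, rng):
    """Re-randomize until the visible profile actually changes."""
    while True:
        candidate = visible_profile(rng)
        if candidate != current:
            return candidate

def craft_exploit(profile):
    """Attacker tailors a payload to the fingerprinted profile."""
    # In this toy model, the payload only works if the target still
    # matches the profile it was built against.
    return dict(profile)

def payload_runs(payload, current_profile):
    return payload == current_profile

rng = random.Random(0)
fingerprinted = visible_profile(rng)   # attacker's reconnaissance snapshot
payload = craft_exploit(fingerprinted)
morphed = remorph(fingerprinted, rng)  # defense re-randomized in the meantime

print(payload_runs(payload, fingerprinted))  # True: works against the snapshot
print(payload_runs(payload, morphed))        # False: the target has "moved"
```

The toy models only the fingerprint-mismatch effect; the point is that the attacker’s reconnaissance goes stale the moment the visible profile changes, which is the “spinning their wheels” dynamic Yehoshua describes.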
Yehoshua said Morphisec developed its technology with the aim of lessening the burden of defending endpoint devices by giving enterprises the ability to be proactive and fool attackers. “Companies are struggling, and the never-ending patching cycle is hard to keep up with. The software patches and the software itself keep getting bigger and bigger,” he said. “This is a simple way to prevent attacks on the endpoint by fooling the attackers.”
In addition, Yehoshua said he doesn’t believe enterprises should concede endpoint devices to attackers because many catastrophic breaches start with an attack on a single user in an effort to steal account credentials and gain access to enterprise infrastructure. “People understand now that to stop an advanced attack, you have to protect the endpoint,” he said.
Morphisec’s Endpoint Protector is currently in beta with customers, and the company expects it to be generally available at RSA Conference 2016 in early March.