Security Bytes


May 13, 2016  8:43 PM

EMM software on every device? MobileIron makes the case

Rob Wright

During the legal battle between Apple and the FBI over gaining access to an iPhone used by one of the San Bernardino shooters in December’s terrorist attack, an unexpected development thrust enterprise mobility management (EMM) software in general, and EMM vendor MobileIron specifically, into the limelight in one of the biggest technology controversies in recent years.

Earlier this year, Reuters reported that the San Bernardino County government had deployed MobileIron’s EMM software on many of the mobile devices used by county employees — but that former employee and San Bernardino shooter Syed Rizwan Farook was not among them. Had the EMM software been installed on the government-owned iPhone assigned to Farook, the county could have remotely unlocked the device and gained access to it.

Why wasn’t MobileIron’s EMM software on Farook’s iPhone? According to a San Bernardino County spokesperson who spoke to The Wall Street Journal, Farook, a restaurant health inspector, wasn’t the type of employee who had access to sensitive government data, and therefore the county determined that MobileIron’s EMM app wasn’t needed on that device.

I recently spoke with Ojas Rege, MobileIron’s vice president of strategy, who talked about the San Bernardino County government’s decision and the iPhone controversy in general.

“The county looked at [Farook’s] device at the time and decided since there wasn’t going to be any proprietary information or sensitive data on it, then it didn’t need the EMM software,” Rege said. “They decided they didn’t need to secure it, and instead they secured other devices.”

On the surface, that decision probably made perfect sense at the time — organizations focus their mobile security policies primarily around protecting the data and applications on the device. And if there are none on the device, then it doesn’t need EMM or mobile device management (MDM) software, the precursor to EMM.

But this approach doesn’t account for how an employee can abuse the device or misuse it for malicious purposes. “You wouldn’t give a new entry-level hire a laptop without any security software on it or the ability for IT to access it,” Rege said.

And in terms of Farook’s iPhone, the issue becomes thornier; the county required that employees, including Farook, use a four-digit passcode to protect government-owned devices, and it set all phones to be wiped after 10 failed passcode attempts (this setting prevented law enforcement officials from accessing the device). Clearly the San Bernardino government was concerned about Farook’s iPhone potentially falling into the wrong hands, despite feeling the data on the device was not worth protecting — it just didn’t consider that the wrong hands would belong to Farook.
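For readers curious what that kind of policy looks like in practice, here is a minimal sketch of the sort of iOS configuration profile an EMM or MDM server pushes to managed iPhones, built with Python’s plistlib. The payload keys mirror Apple’s documented passcode restrictions, but every identifier, UUID and display name below is illustrative, not San Bernardino County’s actual configuration.

```python
import plistlib

# Passcode payload matching the policy described above: a simple
# four-digit PIN, with a device wipe after 10 failed attempts.
# All identifiers and UUIDs are hypothetical.
passcode_payload = {
    "PayloadType": "com.apple.mobiledevice.passwordpolicy",
    "PayloadIdentifier": "example.passcode.policy",
    "PayloadUUID": "0E362AE6-1A2B-4C3D-8E4F-5A6B7C8D9E0F",
    "PayloadVersion": 1,
    "forcePIN": True,          # require a passcode at all
    "allowSimple": True,       # a simple numeric PIN is acceptable
    "minLength": 4,            # four digits, as the county required
    "maxFailedAttempts": 10,   # wipe after 10 failed attempts
}

profile = {
    "PayloadType": "Configuration",
    "PayloadIdentifier": "example.profile",
    "PayloadUUID": "1F473BF7-2B3C-4D4E-9F50-6B7C8D9E0F1A",
    "PayloadVersion": 1,
    "PayloadDisplayName": "Passcode Policy (example)",
    "PayloadContent": [passcode_payload],
}

with open("passcode_policy.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)
```

A profile like this keeps outsiders from guessing their way into a lost phone, but it gives the organization no visibility into what the device’s own user is doing.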

That is a major oversight for enterprises and governments alike, according to Rege; organizations need to consider more than just the data and applications on the device and prepare for how the device itself may be misused (for example, an employee could download fake or malicious apps that could spread malware to other enterprise devices or systems). And Rege has some data to back up his case for having EMM software on virtually every enterprise device.

The MobileIron Security Labs (MISL) division earlier this year released its first quarterly Mobile Security and Risk Review report for the fourth quarter of 2015, which included research culled from MobileIron customers. The report showed that 50% of enterprises surveyed had at least one device that was non-compliant with the company’s mobile security policies at any given time; typical reasons for non-compliance, according to the report, were:

  • Missing, lost or stolen devices (33% of MobileIron customers)
  • Employees removing passcode/PIN protection (22%)
  • Employees removing MDM apps (5%)

These types of non-compliance don’t necessarily mean an enterprise employee is using his or her device for malicious purposes. And to be sure, the chances that an enterprise will find itself in the same position as San Bernardino County – struggling to unlock a company iPhone that could have been used in a terrorist attack committed by one of its employees – are probably very low.

But there are other risks and threats for enterprises to consider. And given how powerful mobile devices have become, and how the devices could be used for malicious purposes, Rege argued that enterprises should consider installing EMM software on every device, regardless of what information or applications are actually on the device or what type of employee is using it. Without it, an enterprise can’t gain visibility into suspicious user activity or prevent an employee from jailbreaking a device and disabling its passcode protections.
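To make the visibility argument concrete, here is a minimal sketch of the kind of compliance sweep an EMM console might run against a device inventory, flagging the same non-compliance categories the MISL report counts. The device fields, device names and fleet data are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    passcode_enabled: bool
    mdm_app_installed: bool
    jailbroken: bool
    reported_lost: bool

def compliance_issues(d: Device) -> list[str]:
    """Return the non-compliance categories this device falls into."""
    issues = []
    if d.reported_lost:
        issues.append("missing, lost or stolen")
    if not d.passcode_enabled:
        issues.append("passcode/PIN protection removed")
    if not d.mdm_app_installed:
        issues.append("MDM app removed")
    if d.jailbroken:
        issues.append("jailbroken")
    return issues

fleet = [
    Device("inspector-042", passcode_enabled=True,
           mdm_app_installed=False, jailbroken=False, reported_lost=False),
    Device("clerk-007", passcode_enabled=False,
           mdm_app_installed=True, jailbroken=True, reported_lost=False),
]

for device in fleet:
    found = compliance_issues(device)
    if found:
        print(f"{device.name}: {', '.join(found)}")
```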

“The way you secure the desktop is going to be the way you secure mobile devices,” Rege said. “Mobile [adoption] sneaks up on people. Before you know it, you have 1,000 iPhones.”

It’s unclear what effect the San Bernardino case will have on how enterprises view EMM and mobile security. But in the context of Farook’s iPhone and San Bernardino County’s decision not to install EMM on it, the MISL quarterly report ends with an eerily prescient note:

“For most enterprises, mobile security strategies are still maturing. Analytics based on the prevalence of identifiable vulnerabilities in mobile devices, apps, networks, and user behavior are key to developing better tactics and tools to reduce the impact of these vulnerabilities,” the report states. “Enterprises with an EMM solution in place generally have many of the tools they need; they just need to activate them.”

April 8, 2016  5:29 PM

Vulnerability branding becomes another marketing tool

Michael Heller

Branding a security threat with a catchy nickname isn’t new, but the practice has evolved over time. Nicknames used to be for worms or viruses (Melissa, Code Red, etc.) and most were named by those who created the code itself, like the Conficker worm or Blaster, which was a worm packaged in a file named MSBlast.exe.

More recently, the trend has been to brand vulnerabilities with punchy marketing names like Heartbleed, VENOM, and Badlock, and to give them logos too. These newer efforts began with the idea of making the issues easier to understand. For example, ShellShock covered a number of vulnerabilities that affected the Bash shell, and Heartbleed related to the TLS heartbeat extension.

At first, this practice was praised because it made it easier for the public to understand a problem and arguably led to higher rates of remediation. The idea was that execs who didn’t know much about security would take an interest in the patching of a given flaw, raising patch rates, and branding made reporting on vulnerabilities easier. Even this benefit has come under scrutiny, though, given the number of servers still vulnerable to Heartbleed.

Unfortunately, there has never been much consistency to the practice and it has begun to feel as though branding a vulnerability is marketing for the researcher (team or individuals) behind the disclosure rather than making it easier to talk about the flaw.

Some branded vulnerabilities have been legitimate security risks (Heartbleed and ShellShock); others never saw measurable numbers of exploits in the wild even with proof-of-concept exploits created (VENOM, Stagefright, GHOST or Rowhammer); and beyond both groups are the vulnerabilities that posed serious security risks but never received branding.

The exclusion of that last group makes sense, partly because if anyone tried naming every Flash vulnerability packed into an exploit kit, they would run out of words before running out of issues, but also because, as Red Hat succinctly put it in a Venn diagram — the overlap between branded vulnerabilities and security issues that matter is not that big.

It may be easier to rally behind a threat with a name, but that doesn’t make it the most dangerous one, and naming only serves to muddy the waters. And in the extreme, a vulnerability like Badlock is branded weeks before it is disclosed, breeding fear with no option for mitigation and giving criminals time to find and exploit the flaw.

Ultimately, if branding doesn’t have a clear purpose beyond marketing the research team that discloses the vulnerability, it could create more issues. At the very least, IT departments would have their time and resources wasted on lower-priority flaws; at worst, enterprises would be left at risk by putting resources into the wrong fixes.


February 24, 2016  7:58 PM

RSA Conference 2016: An opportunity to take a stand

Rob Wright
RSA Conference

This is our own fault.

That was my first thought when I read the news last week that U.S. Magistrate Judge Sheri Pym had ordered Apple to assist the FBI in bypassing the security measures on a locked iPhone that belonged to one of the deceased San Bernardino shooters.

And when I say “our own fault,” I mean the technology industry, and specifically the information security sector. Because too many people were asleep at the wheel while all the encryption backdoor talk and “going dark” nonsense was being thrown about on Capitol Hill and the campaign trails. And now the encryption debate has not only been taken to a higher level, but it’s also been pushed in a perilous direction for the tech industry.

Most security experts seem to agree that forcing Apple to write a custom software tool that will bypass the iPhone passcode lock and/or disable the auto-wipe feature for failed login attempts is a bad idea, if for no other reason than that such a tool could fall into the wrong hands and undermine the security of every iOS device in the world (to say nothing of the potential abuses of even the most well-meaning law enforcement agents). But now experts and tech vendors are scrambling to communicate those concerns (and many others about Judge Pym’s order) and are effectively playing catch-up to the government’s campaign to undermine strong encryption, which has been gaining momentum in recent months.

While I don’t think any amount of pro-encryption pushback from the tech community was going to prevent Judge Pym from issuing this order, such efforts would have at least set the stage for strong opposition against government-mandated backdoors and sent a message to lawmakers and politicians. Remember, this is the same community and industry that effectively shut down the Stop Online Piracy Act (SOPA) in 2012 following large-scale Internet blackout protests. The ability to influence public policy was there; we just didn’t use it.

And we missed or outright disregarded the numerous warning signs that this was coming. While the Obama Administration and FBI Director James Comey said they would not be seeking legislative remedies to the “going dark” problem, Comey made numerous speeches (four in the month of October alone) before Congress and the public about the dangers of encryption (while pro-encryption testimony from tech experts has largely been absent). Meanwhile, politicians and government officials were doing everything they could to blame tragedies like the Paris terrorist attack on encrypted communications while publicly stating their opposition to strong encryption.

I’m not sure why the tech community was so complacent about this. But during a dinner with media members back in December, RSA President Amit Yoran spent the better part of an hour discussing the issues around encryption and “going dark,” and he said something very telling at the time. Just a few days earlier, Sen. Dianne Feinstein (D-Calif.) had said she would lead an effort (after yet another instance of Congressional testimony on encryption from Comey) to “pierce” encryption and compel technology manufacturers to decrypt any and all data at the request of law enforcement.

“This is quite possibly one of the most absurd public policy proposals in recent decades. It just shows a complete lack of understanding as to how technology works,” Yoran said. “I can’t imagine anyone [in the private sector] is going to support that.”

Fine, I said — that’s the private sector. But I argued that if you step back from the tech industry, you’d be surprised at how much public support there is to break encryption and give law enforcement access to all data. A recent poll about the Apple court order supports that argument.

To use an infosec analogy, the industry saw an impending threat and incorrectly assessed the risk before it was too late.

And that brings us to RSA Conference 2016. The world’s largest information security event begins next week, with arguably the most important tech policy issue of our time looming over it: the government’s intent to force technology companies to break their own products and fundamentally undermine security. We can go in one of two directions at RSA Conference. The leading infosec voices and tech leaders can continue to offer tepid support for Apple and try to shrug off the government’s anti-encryption efforts, or they can finally and collectively take a stand and start working to reverse the tide of public opinion on encryption, or at the very least educate the public on the matter.

I’m not optimistic that the industry will move in the latter direction at RSA Conference next week. I think most companies have been secretly content to have Apple, the world’s largest and most popular technology company, take the lead on this issue and allow them to avoid the potential bad press. And I’m not sure how much has changed in recent days.

But I do know we can’t afford to let Tim Cook stand out on an island alone for this fight.


January 29, 2016  2:30 PM

Morphisec plans to bring back endpoint security – with a twist

Rob Wright

It wasn’t that long ago that endpoint security was viewed as an afterthought (and some might argue that for a lot of folks, it still is). As enterprises and security managers scrambled to shore up perimeter defenses and protect the corporate network, attending to the security needs of client devices fell further down the priority list until some organizations punted on it entirely.

But with the rise of mobile devices and BYOD, not to mention growing adoption of cloud applications and SaaS offerings, the importance of endpoint security is coming back into focus. And that’s a good thing for Morphisec, an Israeli security startup that specializes in what’s known as “moving target defense.”

Morphisec CEO Ronen Yehoshua said his company uses “a new kind of prevention technology” that relies on polymorphism to confuse would-be attackers. In other words, Morphisec’s Endpoint Protector technology disguises the true nature of a device by making it appear different from what it actually is; the product randomly changes information about a device and its applications, without modifying the underlying structure of the OS or applications, to confuse hackers and cybercriminals.

As a result, Yehoshua said, attackers will spin their wheels devising malware for a fictitious device profile only to find the malicious code they developed doesn’t work on the target. There are other features of Endpoint Protector, such as “contextual forensics” for increased visibility of attacks, but the moving target defense is the big differentiator.
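Morphisec hasn’t published its internals, but the general idea of a moving target defense can be sketched in a few lines: re-randomize, on every run, some aspect of the runtime that attack code has to hard-code in advance. The toy below, with entirely made-up routine names, is a conceptual illustration only, not Morphisec’s implementation.

```python
import random

# Toy moving-target defense: each launch assigns internal routines to
# different "slots," so a layout an attacker scouted on a previous run
# is stale by the next one. Real products randomize process memory
# structures, not Python dicts.
ROUTINES = ["open_file", "read_credentials", "send_packet", "write_log"]

def randomized_layout(seed: int) -> dict[str, int]:
    """Assign every routine a pseudo-random slot for this run."""
    slots = list(range(len(ROUTINES)))
    random.Random(seed).shuffle(slots)
    return dict(zip(ROUTINES, slots))

print("last run:", randomized_layout(seed=1))
print("this run:", randomized_layout(seed=2))
# Any slot number hard-coded from the last run now almost certainly
# points at the wrong routine, so the injected code misfires.
```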

Yehoshua said Morphisec developed its technology with the aim of lessening the burden of defending endpoint devices by giving enterprises the ability to be proactive and fool attackers. “Companies are struggling, and the never-ending patching cycle is hard to keep up with. The software patches and the software itself keep getting bigger and bigger,” he said. “This is a simple way to prevent attacks on the endpoint by fooling the attackers.”

In addition, Yehoshua said he doesn’t believe enterprises should concede endpoint devices to attackers because many catastrophic breaches start with an attack on a single user in an effort to steal account credentials and gain access to enterprise infrastructure. “People understand now that to stop an advanced attack, you have to protect the endpoint,” he said.

Morphisec’s Endpoint Protector is currently in beta stage with customers, and the company expects it to be generally available at RSA Conference 2016 in early March.


January 27, 2016  10:09 PM

How millennials can be the saviors — not the scourge — of the security staffing shortage

Madelyn Bacon

The security industry is suffering from a complex staffing shortage, and the dreaded millennials might just be the answer to this problem. Some in the industry disagree because “millennial” is a bad word. Millennials have been deemed spoiled, entitled moochers who are trying to break into your workplace and demand the jobs the older generations worked harder and longer for.

But there are those who see that millennials are a diverse group of people who aren’t just threatening to get into the workforce; they have arrived and they intend to work hard. (And to be clear, they don’t like being called millennials. I know this because I am one, and I prefer to identify as the Harry Potter generation).

So those in the security world who recognize the positive side of hiring young people have a question: How do we get millennials interested in working in security?

You don’t need a translator to communicate with millennials. There is no “trick” to getting them interested in security. The problem isn’t with the age demographic; it’s with the lack of respected, effective and widely available security education.

Technology is barely taught in the average high school, never mind cybersecurity. Technology degree programs in colleges and universities are everywhere now, but where are the cybersecurity degree programs?

The “trick” to reaching younger people isn’t developing a popular iPhone app or getting a catchy hashtag trending; it’s setting up broadly offered and well-respected education programs that provide real expertise. How are millennials going to learn if there is no one willing to teach them?

In my experience, millennials want jobs that matter on a grander scale. This generation recognizes the problems in the world and wants to make a difference. Perhaps that makes them idealists, but it also means that part of teaching millennials should be showing them why security matters. This task seems to be getting easier because of all the highly publicized hacks and data breaches happening. Cybersecurity is becoming unavoidable, so millennials will discover it to be a valuable and fulfilling career path. But not if they aren’t taught how security actually works, and not if they’re treated like locusts by their older, biased coworkers. They need guidance and education from the generations who have the benefit of experience.

Some argue that millennials threaten security and privacy, so why let them into the field? Because one day you are going to retire, or die, or become a whistleblower and disappear to Russia, and someone will have to keep the world secure in your absence. Train millennials early and well, so that by the time you’re gone, there’s someone to keep working who has had the benefit of absorbing wisdom from your experience.


January 15, 2016  9:46 PM

Raising awareness: Cisco takes aim at shadow IT

Kathleen Richards
Cloud Security, Cloud Services, Shadow IT

Most people have encountered the cloud at work, whether it’s downloading files from an external business contact’s Dropbox or hearing through the grapevine that your department is now “moving” to a cloud service. Security controls in many of these initiatives are handled (you hope) by someone at the company who is setting policy about these rollouts and what types of applications and sensitive data can be placed in the cloud.

Turns out, that isn’t happening as much as it should. Shadow IT spending may often exceed 30% of an IT budget, according to Matt Cain, research vice president at Gartner, who expects that number to go up because employees want applications and services before IT can authorize and support them.

This week Cisco is rolling out Cloud Consumption as a Service to help large to mid-sized businesses track their employees’ use of shadow cloud services. Cisco also offers consumption services through its global service organization.

According to Cisco, Cloud Consumption as a Service will help companies monitor public cloud services in use and better control data protection, cost and regulatory compliance. The software as a service (SaaS) offering enables companies to sort cloud providers by risk, remove redundant services and benchmark cloud usage to help CIOs gain control of costs, which are sometimes hidden. In addition to discovering shadow services and identifying who is using them, security professionals can create triggers and alerts to detect abnormal usage patterns. The cost of these services is $1 to $2 per month per employee.
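Cisco hasn’t detailed how its discovery works, but the basic mechanics of spotting shadow cloud usage can be sketched from proxy or gateway logs: match outbound hosts against a catalog of known cloud services and flag anything not on the sanctioned list. Everything below (the domains, the log format, the sanctioned list) is hypothetical.

```python
from collections import Counter
from urllib.parse import urlparse

SANCTIONED = {"approved-storage.example.com"}
KNOWN_CLOUD = {"dropbox.com", "box.com", "drive.google.com", "wetransfer.com"}

def shadow_usage(proxy_log_lines):
    """Count requests to known cloud services that aren't sanctioned."""
    hits = Counter()
    for line in proxy_log_lines:
        user, url = line.split()              # assumes "user url" per line
        host = urlparse(url).hostname or ""
        if host in KNOWN_CLOUD and host not in SANCTIONED:
            hits[(user, host)] += 1
    return hits

log = [
    "alice https://dropbox.com/s/report.xlsx",
    "alice https://dropbox.com/s/budget.xlsx",
    "bob https://drive.google.com/open?id=abc123",
]

for (user, host), count in shadow_usage(log).most_common():
    print(f"{user} -> {host}: {count} requests")
```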

Sounds like a good idea, but tracking cloud usage isn’t the only problem.

Many security professionals describe shadow IT services as “circumventing” IT and security teams. In some cases, that is probably true. According to Gigaom Research, 81% of employees admitted to using unauthorized public cloud services but only 38% deliberately avoided the IT approval process.

In many organizations, there is still no policy in place about public cloud services and how they should be handled at the company. Do you really need a tool to monitor cloud usage, or a policy that IT and department heads can use as guidance for these cloud initiatives? The problem often isn’t circumvention or too much red tape—it’s a failure to get policy and procedures (and bodies) in place in front of a coming storm. Virtual machines that go unchecked; unpatched legacy applications that newbies on staff don’t really know how to fix; and now, with a mobile workforce that has moved beyond SharePoint, the inevitable cloud sprawl.

Security questions come up in a department meeting about a cloud deployment, people look confused and say, “I think Joe in IT probably knows the answer to that question.” Turns out he doesn’t.


January 8, 2016  7:27 PM

Cybersecurity and CES 2016: A comedy of omissions

Rob Wright
CES

We’re doomed.

I try not to be too much of an alarmist when it comes to information security matters, because all things considered, the situation on a global level could be a lot worse. We could all be suffering from malware-induced power grid outages like Ukraine, or experiencing stunning invasions of privacy like the poor souls who bought Internet-connected VTech toys for their kids and just didn’t know any better.

The latter situation is more troubling when viewed through the prism of the bright lights and drab histrionics of Las Vegas this week. The Consumer Electronics Show has never really been a home for information security companies, and having attended the show for several years in the past, I didn’t actually expect that to change this year despite the increasing number of enterprise data breaches. After all, CES is about gadgets and TVs and cool tech that people actually want, not need.

But I expected more in the way of security highlights than an iris scan-enabled ATM from EyeLock, a wireless-enabled video security camera puzzlingly named the “Stick Up Cam,” and Internet-connected home surveillance devices from – get this – VTech (!!!). When the most interesting infosec offering of the week is a “privacy guard” smartphone case that provides a Faraday cage around your beloved gadget, then that’s not exactly a great sign.

CES 2016 wasn’t a complete no-show for security. The show did, in fact, have an all-day cybersecurity forum this week with such speakers as AVG CEO Gary Kovacs, security reporter Brian Krebs and Trend Micro Chief Cybersecurity Officer Tom Kellermann. And with RSA Conference 2016 just around the corner, it may not have made sense for infosec companies to spend more time and money exhibiting at CES.

Still, a number of major tech manufacturers have made CES their launch pad for enterprise-focused offerings in the past, and that trend continued this year (just look at how IBM promoted its Watson technology and PC makers like Dell, HP and Lenovo pushed their enterprise client devices). It seems like a missed opportunity for security vendors to cross over into such a high-profile event and promote the benefits of good infosec hygiene, to say nothing of the tech giants that were actually at the show and said virtually nothing about infosec (hello, Intel).

I didn’t attend the show this year, and I’m thankful I didn’t have to make the laborious trip and daily grind that CES requires. But I would have gladly made the sacrifice just to see a few companies make serious attempts to put sound infosec technology in front of 150,000-plus people. But if we can’t get a stronger, more serious security presence at the biggest technology show in the world, then we’re doomed.


April 3, 2015  8:03 PM

The transaction that lasts forever

Robert Richardson
Security

Whether or not you think Bitcoin has a future, it has a couple of very interesting technological elements that will probably have a life of their own. The aspect that everyone talks about is that Bitcoins derive their value by dint of being “mined.” It takes time, computational power, and a pretty hefty electrical bill to mine Bitcoins in any significant amount, and it’s the difficulty of getting new ones (just like it’s the difficulty of getting “new” gold out of the ground) that gives the coins value. But a second, equally interesting aspect of Bitcoins is that there is a distributed transaction ledger system where the transactional history of each Bitcoin is preserved and verifiable for as long as Bitcoin is around and being used.

The distributed ledger part of the Bitcoin world is implemented by way of a “blockchain.” Actually, the money itself is all but indistinguishable from the blockchain. That a Bitcoin exists at all is because a root for the coin (the “coinbase”) is created when the Bitcoin is created, and all subsequent transactions with that coin are recorded in the blockchain. Each transaction is signed by the appropriate party’s private encryption key and then written into the current one-megabyte block of the overall blockchain.

Without trying to explain the mechanics of the whole thing, the point is that the blockchain that fuels Bitcoin is there forever – it will be there until such time as the system completely fails. And even then, the distributed nature of blockchains means that copies will be around. Again, without digging into the details, there are other blockchains out there and it would be at least theoretically possible to store and safeguard a master copy of a failed blockchain if it ever came to that.
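The tamper-evidence that makes this possible fits in a few lines of code. Here is a minimal sketch of a hash-chained ledger in Python; it leaves out mining, signatures and the peer-to-peer network, but it shows why altering an old transaction breaks every block that follows.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Link a new block to the hash of the previous one."""
    chain.append({
        "prev": block_hash(chain[-1]) if chain else "0" * 64,
        "transactions": transactions,   # in Bitcoin, these carry signatures
    })

def verify(chain: list) -> bool:
    """Check every link in the chain."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, ["coinbase -> alice: 25"])
append_block(chain, ["alice -> bob: 5"])
print(verify(chain))                                     # True

chain[0]["transactions"][0] = "coinbase -> mallory: 25"  # rewrite history
print(verify(chain))                                     # False
```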

So the Bitcoin blockchain is immutable and distributed. That’s great for Bitcoins, but it’s also great for some innovative startups. I spoke with Peter Kirby, the president of Austin-based Factom, who explained why it was also great for general ledger storage beyond just keeping track of coins.

Kirby told me that “the simplest analogy of how the blockchain works is I send you something–I send you a bit of information. Everybody in the room agrees and we write that on a brick. We put that brick in a wall and then we stack ten thousand bricks on top of it. Then we move on to the next transaction and we put that brick in the wall and stack another ten thousand on top of that. The blocks are transaction blocks and there’s a whole lot of effort, we call it proof of work, stacked on top of it to make it really, really, really hard to change that block.

“What we said at Factom is, well, there’s a whole lot of stuff in the world that doesn’t look like money transactions. What if we could do a blockchain for data? And it turns out that having permanent, immutable, tamper-proof data is a really big deal for anybody who does record keeping, anybody who cares about their data being written once and never changed and never tampered with.”

The ledger entries that a bank or an insurance company might write to the Factom system could, in theory, be written directly to Bitcoin blocks, but there’s only so much data that can be stored in a bitcoin block and writing large volumes would be expensive and impractical. Instead, Factom commits a very small transaction (it’s the top hash of a Merkle tree of all the transactions from the past ten minutes, if you want to be really geeky about it) to the Bitcoin blockchain every ten minutes and then uses cryptographic hashes to tie all the Factom writes transacted within that ten minute period to the Bitcoin blockchain.
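For the geeky, that top hash is straightforward to compute. Here is a minimal sketch of the Merkle construction the scheme relies on; the entries and batch size are made up, and Factom’s actual encoding will differ.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list[bytes]) -> bytes:
    """Pairwise-hash a batch of entries down to a single top hash."""
    level = [sha256(e) for e in entries]
    while len(level) > 1:
        if len(level) % 2:                  # odd count: duplicate the last
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

batch = [b"ledger entry 1", b"ledger entry 2", b"ledger entry 3"]
anchor = merkle_root(batch)   # this single hash is what gets committed
print(anchor.hex())           # to the Bitcoin blockchain every ten minutes
```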

When a blockchain of entries (these are what get hashed in the Merkle tree) within the Factom system is initiated, Kirby says, “I get to set up what the rules are about what goes in that chain, what belongs in that chain and who’s allowed to do it. Now I’ve got a way to program what the auditors allow in that chain. And I can say, hey, it’s got to have a date formatted a certain way, it’s got to have an ID, and so on.”

Writing to the Factom store (inherently a write-once operation) requires an “entry credit,” which is essentially a software license allowing the access for writing one record. Entry credits, in turn, are converted from a unit called a Factoid, bought with Bitcoins. When sales of Factoids opened last week, buyers got 2,000 Factoids to the Bitcoin. In the first days of the sale, more than a million and a half Factoids were sold.

Kirby says that “one of the things that the development team did that ended up being really useful in the business world is they separated factoids and Bitcoin and all the cryptocurrencies, all the digital aspects of it, from the entry credit. So a big bank that’s not allowed to touch Bitcoins, that’s not allowed to hold Ripple, that’s not allowed to hold Ethereum, none of that stuff—their lawyers won’t let them, can still buy entry credits because they’re non-transferrable, they don’t have monetary value, it’s purely a software license at that point.”

It’s still early days. The initial sale of Factoids just creates the endowment for a Factom Foundation – a full, working beta of Factom remains some months away. The Factom Foundation is a nonprofit that develops and maintains the open-source software; it separates the blockchain and its mechanics from the process of creating applications on top of those basic infrastructure elements. While the appeal of an immutable data ledger is clear for many essential business verticals, how Factom makes money as a commercial operation is still to be sorted out.


March 4, 2015  5:19 PM

Why Hillary can’t mail

Robert Richardson
Security

Reporting by The New York Times notwithstanding, it appears to this non-lawyer that Hillary Clinton probably didn’t break any laws by using a personal email account to conduct state business. But legal or not, what should probably bother us all is that we can’t help but assume that there’s no way that clintonemail.com was secure. Because that’s a key concern here – we just can’t believe that state secrets could be safe in such an account.

The Times broke the story by saying that Clinton had not only made extensive use of a personal email account while Secretary of State, but that “Federal regulations, since 2009, have required that all emails be preserved as part of an agency’s record keeping system.”

It is neither clear that the requirements regarding email accounts were in effect in 2009 (or indeed, during the entirety of Clinton’s tenure in the office), nor that she failed to preserve the email messages (when the State Department requested the emails last year, Clinton provided 50,000 messages).

The relevant law here, it would appear, begins with a memorandum to update an existing (1950) law guiding records management. The memorandum was issued by President Obama in 2011.

This directive didn’t change the rules; it simply called for the rules to be re-examined and updated. The National Archives and Records Administration (NARA) subsequently published its findings and proposed changes in August 2013. At that point, Clinton had been out of office for more than half a year. The NARA findings weren’t actually signed into law until 2014.

Nothing in the new rules (or the old ones) prevents a cabinet member from using a personal email account to conduct business, provided the records are kept and the emails are turned over to that cabinet member’s agency for preservation. And it’s been done before: Colin Powell used personal email for state business.

Legal or not, commentary on the Internet has been quick to say that using personal email is no way to treat sensitive national business. As a practical matter, that seems irrefutably true, though whether it’s equally true that the State Department’s email system is secure is, I’d say, an open question. Certainly their system for diplomatic cables had its rough edges.

But why is it so obvious or inevitable that a personal mail account is insecure? Email is, frankly, a pretty straightforward system. And yes, certificate management is a tricky subject that complicates encryption, but I see no reason why major providers of personal accounts couldn’t issue basic certificates as part of creating an account. This wouldn’t guarantee that messages coming from that account were genuine (because the real identity of the account holder would presumably remain unverified), but it would mean that if I sent you a message encrypted with those keys, nobody else would be able to read it, give or take the NSA.
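The mechanics the paragraph gestures at are genuinely simple. Here is a minimal sketch using the third-party Python cryptography package: the provider mints a keypair at account creation, and anything encrypted to the public half is readable only with the private half. Real mail systems such as S/MIME or PGP wrap this in hybrid encryption, since RSA alone can only encrypt short messages.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The provider would generate this at account creation; a certificate
# is essentially the public half, bound to the account name.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"Re: those cables, see attached", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext.decode())   # only the holder of the private key can do this
```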

Hillary’s team didn’t lack for means, and I suspect she has some pretty sharp people handling her IT. But somehow we’ve been concerned about computer security for a quarter century and we have no faith at all that she could set up her own secure email. Now there’s something that ought to be against the law.


February 18, 2015  5:01 PM

When is an ISAC not an ISAC?

Robert Richardson
Security

A lot of what went on at the White House Summit on Cybersecurity and Consumer Protection, held at Stanford University last week, was for show — a reaction in particular to the attacks allegedly carried out by North Korea against Sony Pictures. Like any live event, there was also clearly some desire to get lots of the right people in the same room; news reports pointed out that several of the right people, including the CEOs of Google and Facebook, opted out.

But this was also an event where the President took the time to show up and deliver a speech. Furthermore, President Obama made a point of publicly signing an executive directive, creating an air of something happening. The sound bite for what was going on, the way that the broad market media covered it, was that this directive encouraged sharing of cyber security threat information between the government and the private sector.

It’s worth noting that most of what the order calls for already exists in one form or another.

The organization traditionally tasked with combating cybercrime is the FBI, though the DHS increasingly seems to think it’s their problem, or at least that it’s their problem to detect incipient attacks and help build up private-sector defenses (since most of the infrastructure that makes up the Internet is in private hands). Presumably it’s still the FBI (and local law enforcement) that crashes through a hacker’s door and impounds their electronics before they can be wiped.

The FBI funded a nonprofit organization, InfraGard, to link US businesses to the FBI all the way back in 1996. Aside from the FBI efforts to foster cooperation, there are a number of information sharing and analysis centers (ISACs) for different industry verticals that were funded as a result of a presidential directive issued by Bill Clinton in 1998.

Since ISACs are still motoring right along, you could be forgiven if you found yourself wondering about the difference between an ISAC and an ISAO (information sharing and analysis organization), which is what Obama’s directive calls for.

As it turns out, there may well be no difference between an ISAC and an ISAO, according to a fact sheet that the White House published alongside the directive. ISACs can be ISAOs, though they may have to follow somewhat different rules if they are, insofar as the directive also calls for a nonprofit agency to create a “common set of voluntary standards for ISAOs.”

Perhaps the key element lurking in the directive is the idea that this network of ISAOs, connecting to the National Cybersecurity and Communications Integration Center (NCCIC) to foster public/private sharing, creates a framework that could serve as a reporting channel companies could use to gain protection from liability when reporting security incidents. This idea of liability protection for companies that share comes from legislation proposed by the President in January. It’s unclear whether Congress or the American public has much stomach for letting corporate America off the hook for leaving their barn doors open.

For the time being, though, just remember that an ISAC and an ISAO are probably the same thing. It’s just that now there’s going to be a whole lot more sharing going on for reasons that, well, aren’t entirely clear.

