Don’t be alarmed, I am still alive! After an unofficial hiatus, I’m back. The past six months have been particularly busy and it’s been a while, but that’s always the risk of writing a blog when you have a full-time job, a life and a family. So, I’m back and better than ever; well, at least as good as ever…
So what’s been going on:
The Obama administration identified cyber security as a national security priority for the US, whereupon two key personnel (Melissa Hathaway, the top White House aide for cybersecurity, and Mischel Kwon, head of the US Department of Homeland Security’s Computer Emergency Readiness Team) promptly resigned;
Hot on the heels of the US efforts, the UK government decided it needs its own central cyber security agency, which will be staffed by “slightly naughty boys”;
The fate of Gary McKinnon is finally decided and he’s extradited to the US. Who knows what he’ll really face;
The Russia-Georgia conflict was blamed for the Twitter and Facebook outages, but the attacks were apparently launched by Russian crime gangs. The distinction between traditionally pigeon-holed threats has finally broken down.
So, anything else important happen while I’ve been away?
I’m currently recruiting a Security Director to replace me as I move on to pastures new. I must admit to being wholly underwhelmed by many of the CVs that have come my way, and also rather upset by the number of applicants currently out of work. Anyone who thinks information security is a recession-proof career is wrong: around half of the CVs received are from individuals made redundant from their previous jobs.
The other disappointing thing is the number of people I’m seeing who are great at writing policy and delegating jobs to third parties but have lost the hands-on technical skills (if they ever had them). From my perspective, the ability to read and interpret a network scan, review an architecture design or read a log file, identify the important issues (as opposed to the trivial), and describe why the issues are important and the work that needs to be done to fix them is bread and butter stuff. Not only that, but it’s the fun part of the job – it’s the bit we should all really want to be doing! Writing a policy document is important, but it’s hardly something to be proud of being able to do. Bring me candidates who still have some security mojo!
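To make concrete what I mean by hands-on triage, here is a minimal Python sketch; the log format, sample lines and threshold are all hypothetical, but the skill is exactly this: pulling the one issue that matters out of the noise.

```python
import re
from collections import Counter

def triage_auth_log(lines, threshold=5):
    """Separate the important (repeated failed logins from a single
    source, a likely brute-force attempt) from the trivial (one-off
    failures). Assumes a syslog-style sshd log format."""
    failures = Counter()
    for line in lines:
        m = re.search(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)", line)
        if m:
            failures[m.group(1)] += 1
    # Report only sources over the threshold: the issues worth escalating
    return {ip: n for ip, n in failures.items() if n >= threshold}

# Hypothetical sample: six failures from one address, one stray failure
sample = (
    ["May 1 10:00:0%d sshd[123]: Failed password for root from 203.0.113.9 port 2200 ssh2" % i
     for i in range(6)]
    + ["May 1 10:01:00 sshd[124]: Failed password for alice from 198.51.100.7 port 2201 ssh2"]
)
print(triage_auth_log(sample))  # only 203.0.113.9 crosses the threshold
```

A one-off failed login is background noise; six in a minute from the same address is a finding, and being able to say why is the bread and butter I’m after.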
At a U.S. House of Representatives hearing yesterday, federal lawmakers and representatives of the retail industry challenged the effectiveness of the PCI rules, which are formally known as the Payment Card Industry Data Security Standard (PCI DSS). They claimed that the standard, which was created by the major credit card companies for use by all organizations that accept credit and debit card transactions, is overly complex and has done little to stop payment card data thefts and fraud.
I disagree that the standard is overly complex – in fact most of it is straightforward, common sense information security. The reason it has proved to be ineffective is because organisations focus on ticking the compliance boxes rather than taking the holistic approach to security that’s needed. There’s enough ranting on this subject elsewhere – the best being on Anton Chuvakin’s blog – and I have little to add.
An interesting rant on Information Security from Marcus Ranum online here. I picked up on the following quote:
The security team explained why it was a bad idea; in fact they wrote a brilliantly clear, incisive report that definitively framed the problem. So the executive asked the web design team, who declared it a great idea and “highly do-able” and implemented a prototype. Months later, the “whiners” in the security team were presented with a fait accompli in the form of “we’re ready to go live with this, would you like to review the security?”
Sounds familiar. However, the important point is that security (or lack of it) is not, and should not be, the sole deciding factor in determining whether or not something gets done. The point is that the risks are known and reported. Management can then use them as a factor in their decision-making process. If security was the sole deciding factor then the business would have collapsed a long time ago and we’d all still be using typewriters and chinagraphs.
Good leaders take risks. We’d like assurance that they are actually balancing the risks and benefits before making a decision rather than just running on gut instinct but sometimes they will make a wrong decision and security factors might sometimes mean the project fails or suffers an incident. However, without risk takers, there would be no innovation and no business growth.
It can sometimes be infuriating to report on risks and see them being taken anyway, but so long as you have identified what those risks are and reported them appropriately then it’s job done.
Loads of coverage of the GhostNet story at the weekend. The FT, NY Times, Sydney Morning Herald and BBC all highlight the Munk Centre for International Studies report on the cyber ‘spying’ network which has compromised government computer networks all around the world.
For those in the information security community it should come as no surprise that there are serious and organised individuals and groups using coordinated computer resources to deliberately and maliciously infiltrate attractive target networks. E-mail based threats are not new and have been the modus operandi for a whole bunch of people for at least the last five years or so. Back in 2005 Israel’s hi-tech business sector was stunned by a major computer espionage scandal involving targeted trojan e-mail attacks. The anatomy of attacks has changed; accept it and let’s move on.
Munk’s report heavily hints at Chinese state sponsorship, but there’s no conclusive evidence at all: a causal relationship is drawn between the physical location of the command-and-control infrastructure and the perpetrators of the activity. In this case a Chinese computer is implicated, but that doesn’t mean that China itself is the sponsor of GhostNet.
Heaven only knows how many unprotected, unpatched, poorly configured and poorly managed computer networks using unlicensed O/S there are in greater China. It’s an easy and rich playground for international organised e-crime to take advantage of inadequately protected computers to create multiple platforms for their attacks. Shooting fish in a barrel comes to mind.
This is a fast-moving and highly dynamic field, and pinning the blame on a nation is, IMHO, too simplistic and naive. We’re unlikely to ever know who the real source of this activity is, so let’s accept that and get on with more valuable ways of using our time and attention. Instead, let’s focus our energy on raising standards of computing through education and awareness about the dangers everyone faces from vulnerable, poorly protected or poorly managed computer networks. We’re all in this together!
Does anyone know of a smart phone or mobile device that enforces account and privilege separation?
It’s long been held good practice to run user accounts with the least level of system privilege and only use admin accounts when you absolutely have to. The obvious danger is that if you’re always operating with elevated admin rights and your device is compromised, then the attacker runs with your admin rights. This is far from a perfect situation and can easily lead to security meltdown.
All the popular mobile devices and smart phones I’m aware of operate with full admin rights all the time, which seems like security madness to me. Code signing of downloaded apps will help to establish some level of trust in the source of content, but all bets are off with content-based attacks arriving via e-mail.
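For contrast, here is what the drop-what-you-don’t-need step looks like on a desktop Unix system; a minimal sketch, assuming a POSIX platform (uid/gid 65534, conventionally “nobody”, is an illustrative choice), and it’s precisely the boundary the mobile platforms I’m describing lack.

```python
import os

def drop_privileges(uid=65534, gid=65534):
    """Irreversibly shed root privileges down to an unprivileged
    uid/gid (65534 is conventionally 'nobody'). This is the classic
    privilege-separation move a Unix daemon makes immediately after
    acquiring the privileged resources it needs."""
    if os.getuid() != 0:
        return False          # already unprivileged; nothing to drop
    os.setgroups([])          # shed supplementary group memberships first
    os.setgid(gid)            # group must change before the user id,
    os.setuid(uid)            # otherwise setgid() would need root again
    return True
```

A compromise after this point runs with “nobody” rights rather than root; on a phone that runs everything with full rights, a single exploised app owns the whole device.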
I love system functionality, it’s a great thing. It brings a rich and dynamic user experience, and empowerment through seamless processes to get things done. Whether it be business functionality or technical functionality, we now have more system functionality at our fingertips than we’ve ever had.
In the 1980s BT offered a dial-in bulletin-board type service known as Prestel. It was a simple service based on Viewdata technology with a private electronic mail capability, and was my first experience of being in an online networked environment. Prestel hosted the UK’s first home online banking service (Homelink). It was a basic service to say the least. You could view your account balance and pay specific utility bills, but that was about it. At the time it was revolutionary, but today limiting your service to these features would be archaic.
Let’s jump forward to today, when Internet banking is the norm and the range of banking functionality is enormous (I can wire money anywhere in the world, from anywhere in the world, without delay or interruption and at minimal cost). It becomes obvious that retail banks have, for a number of reasons, dismantled their internal processes and controls, brought them out of the back office, and put them right into every living room, coffee shop and street corner (a scale issue).
There’s no doubt that there are real rewards in giving you access to all this extra functionality, and I’m a real fan of Internet banking myself. However, the problem comes when/if your account becomes compromised. Back in 1984 I really wasn’t bothered, as Homelink really wouldn’t let you do that much, so the damage was limited and the impact minimal. In today’s environment all bets are off, and your account is easily plundered or your Visa charged before you know what’s hit you.
And therein lies the rub. The more functionality we provide in information systems the more opportunity we create for nefarious or malicious activity to flourish. Add in the scale dimension and the problems start to be compounded even further. When I started this thread I argued that you could only ever have two elements from scale, functionality, and security and I still believe this to be true.
If this is the case, how can we bring the vital security element into the picture? If we can’t build the right amount of security into highly scaled, functionally rich systems, then how can the general public trust and embrace them? People will readily adopt new systems when they see a clear reward for changing their behaviour, but will not go past the point where the risk outweighs the attached rewards.
My last post in this series will look at what we really mean, and can expect of security in this environment and what we as information security professionals and our Boards of Directors need to clearly understand about these realities.
(Postscript: Thinking about my first adventures with Prestel has brought back loads of memories. For those of us in the UK this cutting edge service significantly changed the IT security legal landscape. After Prince Philip’s Prestel e-mail account was compromised the Computer Misuse Act 1990 was introduced, and I guess things have gone downhill ever since 😉)
Few of my posts have generated as much venom in my direction as this one from last week. One blogger from America has gone so far as to write two very lengthy pieces in response, while the highly respected security guru and fellow blogger David Lacey referred to it as drivel. Another public commenter calls it trite.
I was well aware that my remarks about the usefulness of security awareness programs and risk models in particular would raise some eyebrows. However, I welcome the debate: we shouldn’t be shy to challenge the accepted norms because there’s plenty of evidence around that they frequently don’t work.
Trite or drivel it might be… I actually started off with a list of ten!
Another third party vendor failing to implement decent security around sensitive data.
You’ve got to check out your vendors! The vendor might be at fault, but it’s your data, and your liability.
On Monday I remarked on the BBC Click botnet investigation. I slightly regret my post because, in fact, I think they did a great job in bringing to life the potency of botnets. Legalities aside, let’s focus on the fact that it only took 60 PCs to cause a denial of service situation. That’s very disturbing and we all need to sit up and consider the consequences of that.
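The arithmetic behind that is sobering. Here is a back-of-envelope sketch with assumed figures (a typical consumer upstream and a modest target link; neither number comes from the programme):

```python
# Rough capacity arithmetic: why even a tiny botnet can be enough.
# Both figures below are illustrative assumptions, not from the BBC Click piece.
bots = 60
upstream_mbps_per_bot = 1.0   # assumed consumer broadband upstream
target_link_mbps = 10.0       # assumed small-business connection

attack_mbps = bots * upstream_mbps_per_bot
print(f"Aggregate flood: {attack_mbps:.0f} Mbps "
      f"({attack_mbps / target_link_mbps:.0f}x the target link's capacity)")
```

Sixty compromised home PCs comfortably saturate a link several times over; scale the same sum up to a botnet of tens of thousands and the consequences speak for themselves.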
I was chatting with the CISO of an investment bank earlier today. He was wondering whether or not we should have in place a legal framework that would allow “researchers” a better way to test system security without fear of being accused under the Computer Misuse Act. It’s dangerous territory, but I take his point. If somebody discovered a gaping hole in my own organisation’s network security then I’d be grateful for the information. Many of the third parties I legitimately employ to do discovery work do little more than run Nessus and then post a report, so the hacker’s view would be invaluable. But where do you draw the line between “hacking” and “research”, and what assurance can be gained from an unsolicited security report?