Aite Group, a Boston-based research and advisory firm, on Wednesday issued a report with some interesting findings on what people in the industry think it will take to secure payment cards. Respondents to a survey the firm conducted at the MasterCard Risk Symposium in Miami last month expect it will cost around $100 billion to fix card security in the U.S. Sixty-seven percent of survey participants expect card issuers to foot that whopping bill. Now, the report is based on a rather small sample (29 people), but it carries weight, since most of those surveyed are heads of risk management for issuing banks or payment processors.
So what exactly will it take to improve card security? Ninety-two percent of those surveyed by Aite Group believe end-to-end encryption across the card network will have a high impact on reducing card fraud losses within the next three years, according to the report. More than two-thirds of respondents see data loss prevention (DLP) technologies as helping to reduce card fraud. Fewer see a move to EMV architecture (an industry standard for chip-based payment cards) as having a big impact, but Aite Group researchers figure that may simply be because most don't see it happening soon in the U.S. Those surveyed said the decision to shift to EMV in the U.S. is likely at least five years away, and 36% don't believe it will ever happen.
If a major piece of your security strategy revolves around employee training, the following video might be a major setback. Many security pros pride themselves on the amount of training they give their employees. But I wonder, is it all for naught?
A Google employee took a camera and microphone onto the streets of New York City to find out whether non-techies knew what a browser is, and the results were astounding: less than 8% of those interviewed knew. And these weren't residents of an assisted living facility or a 55-and-over community; many of them could well have Facebook accounts and even Twitter handles.
After watching the following video, I wonder, how would I begin a security training program if many of my employees don’t know what a browser is? Phishing sounds like a foreign language and malware sounds like a bad word. Maybe the next generation will have a better understanding. But how long can we wait?
Could flaws in social networks send the Internet spiraling out of control?
A flaw discovered in URL shortener Cligs (Cli.gs) last weekend demonstrates the fragility of the social networking ecosystem and how potentially dangerous it could be.
Cligs competes against TinyURL and Bit.ly, which dominate link shortening on Twitter, and is recognized as the fourth most used link shortener on the service. On Monday, Cligs acknowledged the flaw, calling it a security hole in its editing functionality.
The attack edited most URLs on Cligs to point to a single URL hosted on freedomblogging.com. "I've identified the hole and disabled all Cligs editing for now, and I'm restoring the URLs back to their original destination states," the company wrote.
Lucky for Cligs that whoever discovered the gaping hole only redirected links to a story on freedomblogging.com and not to a porn site or an attack page. According to the blog post, 2.2 million URLs were affected.
Phishing attempts (Twishing), Tweetspam and even Twitter worms are being tracked by the major security vendors. Sammy Chu of Symantec Security Response today said the vendor has detected fake Twitter invitations that carry a malicious mass-mailing worm. The messages appear as if they have been sent from a Twitter account.
This is all very close to spiraling out of control. Attackers are latching on to Twitter, MySpace, Facebook and others and using them to spread malware and harvest data. In a recent interview, security expert Lenny Zeltser told me that these short bursts of information (140 characters on Twitter) don't raise any eyebrows on their own. But combined with hundreds and in some cases thousands of other posts, the data could be used in a social engineering attack and could in fact harm businesses.
What can be done? To avoid being duped by malicious shortened links, Graham Cluley, a security consultant with UK-based security vendor Sophos who was the first to blog about the Cligs hack, urges people to run a plug-in that expands shortened URLs before they are clicked.
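Such plug-ins work by following a short link's chain of redirects to its final destination before the user ever visits it. Here's a minimal sketch of that resolution logic in Python, simulated with an in-memory redirect table rather than live HTTP requests (the function name and the table are illustrative, not any real plug-in's API):

```python
def expand(url, redirects, max_hops=10):
    """Follow a chain of short-URL redirects to the final destination.

    `redirects` maps a URL to the URL it redirects to; a real plug-in
    would discover each hop with an HTTP HEAD request instead.
    """
    seen = set()
    while url in redirects:
        if url in seen or len(seen) >= max_hops:
            raise ValueError("redirect loop or chain too long: " + url)
        seen.add(url)
        url = redirects[url]
    return url

# A shortened link that bounces through two services before landing:
table = {
    "http://cli.gs/abc": "http://tinyurl.com/xyz",
    "http://tinyurl.com/xyz": "http://example.com/story",
}
```

With a table like this, `expand("http://cli.gs/abc", table)` surfaces the real destination, `http://example.com/story`, so the user can decide whether to click.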
But we can’t rely on the public to take action. And they shouldn’t have to. It probably would be difficult for any group or association to take the lead on ensuring the security of social networks, but these organizations may benefit by joining forces in some sort of social network cabal to hash out standards around security and privacy issues.
The good news is that security researchers seem to be on top of the threats and the alarm is being sounded. But why does it take a group of concerned security researchers and experts to get Google to better secure its Web applications? Who inside the search giant, or any of these websites, is weighing the risks and deciding to let the dice roll on security?
Unfortunately, it may take a catastrophic event to get any of the social media giants to take action. They owe it to their millions of users, and it may be the most prudent approach to ensuring their longevity on the Web.
Now go and listen to this interview with Lenny Zeltser on social networking woes:
In the race to be first, some information sources reprinted a forum post boasting of hacking into T-Mobile servers. In this case it appears to be the media that got pwned.
T-Mobile was put on the hot seat this week after an anonymous person posted a message on a hacker forum boasting of hacking into T-Mobile’s servers, stealing mountains of data, including customer records, account information and T-Mobile proprietary data.
The poster was seeking money, inviting serious inquiries only from those willing to shell out cash for the supposedly stolen information. Several bloggers immediately jumped on the post, followed by several publications. With little information to go on, the briefs linked to the anonymous post under headlines warning of the next big breach.
The message was posted to Full Disclosure, a mailing list that has carried questionable postings in the past. It showed information on T-Mobile's various systems, including IP addresses of various servers and enterprise systems. T-Mobile quickly responded to the reports, conducted its own investigation and, within a few days, issued several statements, the final one calling the original post unfounded.
“Following a recent online posting that someone allegedly accessed T-Mobile servers, the company is conducting a thorough investigation and at this time has found no evidence that customer information, or other company information, has been compromised,” according to a revised T-Mobile statement.
In case the statement wasn’t clear enough, T-Mobile broke it down into bullet points. There was “no hack or breach of security.” Meanwhile an investigation continues into how the document of T-Mobile server information was obtained.
While the post must have given T-Mobile officials a scare, it is unlikely that a hacker broke in and stole sensitive data, said Alex Rothaker, research and development manager at Application Security Inc., who leads the firm's Team SHATTER (Security Heuristics of Application Testing Technology for Enterprise Research) organization. Rothaker said the data on the company's servers may have come from an insider or someone who worked on T-Mobile's systems.
"Something as simple as Nmap can give you a lot of that information," Rothaker said, referring to the free network scanning tool. "By itself [the information] is not a total breach … This could truly just be somebody playing a prank or trying to make a name for themselves."
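The kind of reconnaissance Rothaker describes boils down to probing which ports on a host accept connections. A stripped-down, Nmap-style TCP connect scan can be sketched in a few lines of Python (for illustration only; probe only hosts you are authorized to test):

```python
import socket

def scan_port(host, port, timeout=1.0):
    """Return True if `host:port` accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    """Map each port in `ports` to whether it appears open."""
    return {port: scan_port(host, port) for port in ports}
```

Real Nmap adds OS fingerprinting, service version detection and stealthier scan types, which is exactly why a list of server IP addresses and open services, on its own, proves nothing about a breach.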
Rothaker said T-Mobile is likely doing a deep analysis of its server logs to try to find any anomalies. The lesson for other companies is to ensure that activity monitoring tools are in place and that access controls on databases and servers restrict who can reach confidential data.
Don't get me wrong. This wasn't a failure on a grand scale. Some organizations got the story right, explaining that the post could be frivolous and focusing more on the fact that T-Mobile had initiated an investigation. In any case, T-Mobile officials need to take every case like this seriously. But I hope this issue serves as a reminder for reporters to take a deep breath, confirm information and not rush to post a story for the almighty page view without doing a little follow-up work. We're forgetting some traditional journalistic principles. We need to apply a heavy dose of skepticism, especially in the cybersecurity industry, where much of the information could be damaging to individuals and companies.
In the race to be first online, I often wonder if we’re driving our journalistic principles into the ground, shredding them to serve up a piece of content that ultimately serves no purpose except to gain as many views as possible. Reporters are pitted against bloggers, many of whom have no formal background or knowledge of journalistic ethics. Ultimately speed does a disservice to the public.
Google Maps integration looks cool, but in this tough global economy, are software vendors going to spend the money needed to pursue offenders?
V.i. Laboratories Inc. is adding Google Maps to its CodeArmor Intelligence piracy threat software. Independent software vendors (ISVs) use the product by integrating it into their software release process. If the software is then used without a license, a hidden algorithm silently phones home, telling the ISV the location and possible business profile of the offender. It is a cloud-based service and uses the Salesforce.com platform to provide a dashboard of reporting data, legal information and product management tools.
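V.i. Labs hasn't published how CodeArmor's reporting works internally, but the general "phone home" pattern is simple to sketch: on startup, an unlicensed copy gathers identifying details about its host and reports them to the vendor. A hypothetical illustration in Python (all field names are my assumptions, not CodeArmor's actual protocol):

```python
import platform
import socket

def piracy_report(product, version, licensed):
    """Build the payload an unlicensed install might silently send home.

    Returns None for licensed copies. A real system would POST the dict
    to the vendor's collection service, which could then geolocate the
    caller's IP address to plot it on a map.
    """
    if licensed:
        return None
    return {
        "product": product,
        "version": version,
        "hostname": socket.gethostname(),
        "os": platform.system(),
    }
```

Turning each such report into a geocoded record is what lets a dashboard pin suspected pirates on a map.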
In May, a report from the Business Software Alliance (BSA) and research firm IDC found losses to piracy grew by 11% to $53 billion; dollar losses from piracy in the United States totaled $9.1 billion, according to the report. Companies that pirate software deserve to be detected and held accountable, but this software goes one step further: instead of treating offenders as criminals, it turns a piracy detection into a sales lead, giving independent software vendors the ability to recover revenue on their own, through channel partners or by seeking legal action.
The software was first released in August 2008. While most software vendors incorporate a license activation key or some other form of registration to turn on all the features of their software, V.i. Labs says that's just not enough to keep people from using unlicensed versions.
"Currently, ISVs rely on licensing, activation or home-grown approaches to gather data, which are easily detected, bypassed or disabled by piracy groups," the company said. "These home-grown systems also lack filters, advanced reporting processes and platform support, and are unable to organize and report the information in a way that can be used to develop a piracy lead."
The Google Maps integration is added through a reporting plug-in for the Salesforce.com platform. Users of the reporting application can look at the map and, at a glance, get a relative location of suspected software pirates.
It is unclear how many small independent software vendors really want to march down this road. The software is not inexpensive, with a minimum price of about $50,000, but if companies can detect pirates and recoup lost revenue, the investment could pay for itself. Of course, legal action is itself a cost factor, so the price of going after software pirates could rise significantly if the offending company doesn't cooperate. I suspect most firms treat software piracy as a write-off at the end of the day, to avoid legal costs.
In these tough economic times, struggling companies may be more inclined to use software without a license. But at the same time, how do you recoup lost license revenue from a struggling offender with no money?
More than 80% of financial-services managers said they expect ATM/debit card fraud attempts to increase this year, a survey finds.
A recent survey by Actimize has some noteworthy findings, once you get past the parts that are geared to promote the vendor’s antifraud and risk management software.
Of the 113 financial-services managers polled (admittedly, not a very big sample), 40% said they saw a double-digit increase in ATM/debit fraud claims in 2008 compared with 2007. More than 80% said they expect ATM/debit card fraud attempts to increase this year, and almost 35% expect them to increase between 10% and 14%. Survey respondents represented retail banking, card issuers and payment processors.
More than 55% of the respondents predict U.S. card fraud to increase when Canada adopts chip and PIN, which Actimize said is expected to “reach critical mass” by 2010. Almost half said they expect fraud perpetrated by customers themselves — not outsiders — to increase this year.
Actimize’s survey also asked a lot of questions about the impact of mass compromises of payment card data, such as the Heartland breach. Such breaches impact financial firms in three main areas, said Jasbir Anand, fraud product manager at Actimize: overall costs, call center volume and a decrease in customer confidence.
Forty-eight percent of those surveyed said less than 1% of compromised accounts actually experience fraud and almost 15% said of the cards they reissued after a mass breach, 20% were for accounts that were unaffected by actual fraud. The cost of reissuing a payment card can range from $3.50 to $30, Anand said, making the costs of reissuing cards out of proportion to actual fraud losses.
I know some credit unions, in the wake of the Heartland breach, acknowledged that reissuing cards was costly, but they also said it was the right thing to do for their customers. A spokesperson at Washington State Employees Credit Union, which had to reissue about 4,000 affected debit and credit cards, said it wasn’t acceptable to see if something happened to the cards before reissuing them.
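Anand's per-card range makes the scale easy to work out. For a reissue on the order of WSECU's 4,000 cards, the back-of-the-envelope arithmetic looks like this:

```python
def reissue_cost(cards, cost_per_card):
    """Total cost of reissuing a batch of payment cards."""
    return cards * cost_per_card

# Anand's range of $3.50 to $30 per card, applied to 4,000 cards:
low = reissue_cost(4000, 3.50)    # $14,000 at the low end
high = reissue_cost(4000, 30.00)  # $120,000 at the high end
```

If, as 48% of respondents said, fewer than 1% of compromised accounts ever see actual fraud, much of that spend covers cards that were never at real risk, which is exactly the disproportion Anand describes.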
When was the last time you considered the state of your vendor relationship? Are they doing anything behind your back?
Google recently presented the results of its study touting that users of its Chrome browser are far more likely to have the latest version installed, because Chrome includes a silent update feature that automatically checks for and installs the latest version with virtually no user interaction.
Software updates have become ubiquitous across applications, regardless of their purpose. Sometimes the user must check for a new version, but often an automated process checks for an available update and then prompts the user to approve its installation.
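The difference between Chrome's silent approach and the prompt-first approach comes down to a single branch in the updater's decision logic. A simplified sketch (the version format and return labels are illustrative, not any vendor's actual implementation):

```python
def plan_update(installed, latest, silent):
    """Decide what an updater should do when it checks for a new version.

    Chrome-style updaters install silently; prompt-first updaters ask
    the user before touching anything.
    """
    def parse(version):
        # "1.5" -> (1, 5), so versions compare numerically
        return tuple(int(part) for part in version.split("."))

    if parse(latest) <= parse(installed):
        return "up-to-date"
    return "install-now" if silent else "ask-user"
```

Everything I argue below about choice hangs on that last line: whether the user ever sees the "ask" step at all.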
I must admit that like many users, when I am moving quickly on a task, I’ll sometimes delay an application update for another time. But keeping that update process silent, without the user’s knowledge, strikes me as putting security ahead of the user. If I want to surf the Web without antivirus protection, I will do so. If I want to remain on version 1.x instead of 1.5, I want the ability to have that choice. When was the last time you got into an automobile and an automatic seat belt swung into place? Admit it, the auto industry caught on. Even though seat belts could save a customer’s life, automatic seat belts are a thing of the past. They were too intrusive, resulted in less choice for the driver and passenger, and ultimately, I bet they hurt sales.
Mozilla’s Johnathan Nightingale got it right when he said Mozilla prides itself on giving its users information. “We make certain choices, like telling users when security updates happen, and not automatically upgrading users to new ‘major’ versions … because we think it’s important to give our users that information and choice,” he said, explaining his take on the Google study.
Software as a Service and cloud computing services could dramatically change the discussion around patching. But perhaps more importantly are the questions that remain unanswered. Marcus Ranum, CTO of Tenable Network Security Inc., asked the following two questions:
- Why are we running software that is so bad it constantly needs patching?
- Since the “security researchers” have been saying for 15+ years that their bug-hunting activities are part of “making software better,” can we declare that effort to be a failure, yet?
It's possible that if the industry starts to adequately address the issues within the software development lifecycle, the patching discussion will become moot. Bruce Schneier said something several times at the 2009 RSA Conference that stuck in my mind: cloud computing is about trust. Do you trust your vendor? We already trust our software and hardware vendors to a certain extent; by downloading a piece of software or buying an electronic device, we are engaging in a relationship. The fact is, by making software updates silent, the vendor is doing something behind our back, and that calls the relationship into question. Isn't that when relationships tend to fail?
For now, I'll happily continue to put off my software updates until they're convenient for me. And yes, I wear a seat belt.
Feeling stuffed, sluggish? Oh, it’s not you? It’s your PC suffering from a bad case of AV bloat. How many thousands of antimalware definitions can it take? How many updates? (Remember when your AV vendor recommended downloading updates at least once a week — or was it even once a month?)
Small wonder antimalware vendors are seriously looking to cloud-based detection, taking the burden off your poor laptop’s memory, CPU and grinding hard drive.
The latest idea, from Panda Security, is a free thin-client product that analyzes potential malware on execution, not on the PC but in the cloud, where the resources of PandaLabs' Collective Intelligence determine whether it is malicious or benign and direct the client to allow or block execution accordingly.
"It's getting more and more cumbersome to deal with large signature files and pushing those out to everybody," said Forrester analyst Jonathan Penn. "We've seen the hockey stick graphs with thousands of new virus strains a month. Pushing into the cloud instead, assuming some level of network connectivity, makes a lot of sense."
The cloud approach is not unique to Panda. Most of the leading AV vendors have some similar component: If the desktop engine — using whatever combination of traditional signatures, behavioral analysis, host-based intrusion prevention, application control, etc. — encounters a file it can’t assess, it ships its telltale traits in some sort of hash off to the Big Lab in the Sky for analysis by the vendor’s analog to Panda’s Collective Intelligence.
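In outline, the client side of that hand-off is simple: hash the unknown file and ask the cloud service for a verdict. A minimal sketch in Python (the lookup service here is a stand-in dictionary of digests; real vendors use their own proprietary telemetry formats and richer file traits than a single hash):

```python
import hashlib

def fingerprint(file_bytes):
    """SHA-256 digest the client sends instead of shipping the whole file."""
    return hashlib.sha256(file_bytes).hexdigest()

def cloud_verdict(digest, known_bad_digests):
    """Stand-in for the cloud lookup: block digests in the malware corpus."""
    return "block" if digest in known_bad_digests else "allow"
```

The point of the design is that the signature corpus lives server-side, so the client stays small no matter how fast the malware count grows.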
The cloud’s capacity — unlike your PC — is unlimited.
But the unique and really intriguing aspect of Panda Cloud Antivirus, released in beta this week, is the thin client aspect. Users install the client (you have to uninstall your current AV, which probably rules out your corporate laptop as a test machine), and, Panda tells us, you’re protected in real-time.
It's not clear where Panda plans to go with this eventually; they're holding that close to the vest. At the very least, Cloud Antivirus will increase the flow of potential malware samples to Panda's cloud-based detection, improving its effectiveness. The target community, for now, is sharp end users, including IT and security professionals, who can give the company significant feedback.
(I'll run it, nervously at first, on my home PC and back it up with Spybot and Malwarebytes Anti-Malware on-demand scans to reassure myself. I expect serious security people, not journalist-poseurs like me, will get deep under the hood to see what's really happening on their test computers.)
"Panda recognizes they can benefit from a broad consumer footprint," said Penn. "Consumer PCs are kind of the front line in the fight against malware. They're going to detect things first, they're more likely to be the target of attack. More attacks will actually get through to them."
Panda said Cloud Antivirus will use a third of the RAM of traditional desktop products and have about half the average performance impact.
The thin client notion is not unique to Panda, though it’s arguably taking the lead among vendors. McAfee has a thin client product, VirusScan TC (ThinClient), which is pitched as a small-footprint, low-bandwidth alternative, especially for remote users on slow connections.
And, last September, researchers at the University of Michigan, Ann Arbor, proposed a service provider/network-based approach using a thin client and multiple detection engines (“Rethinking Antivirus: Executable Analysis in the Network Cloud”). They used a thin client to ship thousands of malware samples through eight AV products and two behavioral analysis tools. The individual AV products’ detection rate ranged from about 55% to 87%, but the combination of all detected more than 96% of all the malware.
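The Michigan result follows from simple set logic: a sample slips past the combined system only if every engine misses it. A toy sketch of that OR-combination (the engines here are trivial stand-ins, not real scanners):

```python
def combined_verdict(sample, engines):
    """Flag a sample if any one engine flags it.

    This is the OR-combination that lifted detection from 55-87% per
    engine to over 96% overall in the Michigan study.
    """
    return any(engine(sample) for engine in engines)

# Stand-in "engines": each is just a predicate over a filename here.
engines = [lambda s: "dropper" in s, lambda s: s.endswith(".scr")]
```

Because each engine has different blind spots, their misses overlap only partially, which is why the combined rate beats every individual one.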
Using a bunch of different AV engines may not be a practical solution, but the thin client model is valid, especially when one considers the constant flow of information into the cloud and the resources any given vendor can throw at the problem.
It's not exactly a surprise that LogLogic acquired Exaprotect. The two partnered in February to add Exaprotect's SEM engine as a module riding atop LogLogic's log management and analysis platform.
The pending deal, announced Wednesday at RSA, is something of an indication that the log management and SIM/SEM/SIEM markets are becoming too closely integrated to distinguish. (Pick your acronym. At RSA this week, Forrester's John Kindervag suggested "SIRS", for Security Information Reporting System, arguing that these tools' primary value lies in reporting and compliance rather than security.)
In the end it’s all about collecting and analyzing information analysts can use for compliance, operational efficiency, forensics, and, maybe, security.
Regulatory compliance, particularly PCI, has driven sales of both log management and SIEM, transforming log management from a niche market to something of a must-have. Major SIEM vendors like ArcSight, seeing these hungry upstarts doing well, were quick to spin off separate log management products or modules to get a piece of the action.
Meanwhile, log management vendors have had some SIEM-like capability, a sort of SIEM Light. It makes sense that LogLogic is building on its success to provide a fuller package. Along with the SEM offering, the company announced a database monitoring and auditing module (partnering with an unnamed DB monitoring partner) and Compliance Manager, automating compliance approval workflows and review tracking.
The Exaprotect acquisition also brings in Solsoft Change Manager, providing configuration management capabilities, which will round out the LogLogic package nicely for both compliance and operational control once the products are integrated.
These days, you can’t log onto Twitter or do a Google search without crashing headfirst into something information security related. Security pros have embraced social networking in a big way, and they’re contributing a lot more to the blogosphere and Twitter arena than updates on where they’re having lunch.
Any of you who contribute or follow the active members of the security blogosphere probably know of the Security Bloggers Network. The network generally meets face-to-face at events such as RSA with a get-together known as the Security Blogger Meetup. Last night’s meetup featured the first presentation of the Social Security Awards, which recognized the best security blogs and podcasts. Alan Shimel of StillSecure, Rich Mogull of Securosis and Martin McKeay, who hosts the Network Security Podcast with Mogull, hosted the awards portion of the night; Jennifer Leggio, a longtime journalist and social media blogger, did a lot of the legwork to organize the event. A panel of journalists did the judging — and yes, a good time was had by all.
Winners were recognized in five categories: