IT Compliance Advisor

November 17, 2011  9:31 PM

Coordinated Facebook spam attack raises eyebrows, alienates users

Ben Cole

It was the shot heard round the social media world: This week, a Facebook spam attack resulted in pornographic and violent images showing up on users’ news feeds. Facebook has always prided itself on avoiding such attacks, and this was a big one. There are predictions that the site will lose some of its more prudish users because of the attack, which could hurt the social media juggernaut’s business model.

During the “coordinated spam attack,” users were tricked into pasting and executing malicious JavaScript in their browser URL bar, causing them to share the content, according to a Facebook statement. Facebook is now in the process of identifying those responsible for the spam attack, has built security measures to shut down the malicious pages, and is working to educate users on how to protect themselves from similar spam attacks.

But who should really be held responsible for the Facebook spam attack? Do people using Facebook really not realize that they should avoid copying and pasting a suspicious-looking link from an unknown source into their browsers? I know a gift certificate to a themed chain restaurant is enticing, but come on. Facebook says it’s providing users with “educational checkpoints” to protect themselves. Is one of these checkpoints “Don’t be stupid”?

I think Helen A.S. Popkin said it best in the Technolog blog: “Viral scams persist on Facebook because Facebook users continue to click malicious links.” A study this week by the National Cyber Security Alliance and McAfee found that of 2,337 U.S. adults surveyed, 24% are not at all confident in their ability to use the privacy and security settings in their social networks. Another 15% of respondents have never checked their social networking privacy and security settings, and only 18% said they had checked those settings within the past year.

These findings are just one example of the disconnect between the threats facing everyday Internet users and what those users consider “safe and secure” Internet use. As more incidents like the Facebook spam attack occur, companies will no doubt try to comply with consumer protection rules and establish their own policies to protect customers. But perhaps users need to do a little more to protect themselves as well.

November 14, 2011  8:38 PM

Regulators renew focus on Facebook, consumer data protection practices

Ben Cole

A few months ago, it was Google in regulators’ crosshairs. In the past couple of weeks, however, it seems that Facebook is regulators’ new focus, as they push for consumer data protection.

Facebook is close to a settlement with the U.S. government over charges that it misled users about its use of their personal information, according to The Wall Street Journal. The settlement — currently waiting for Federal Trade Commission (FTC) approval — reportedly would require Facebook to submit privacy audits for 20 years and to obtain users’ consent before making “material retroactive changes” to its privacy policies.

The report comes as the FTC and other global regulators continue their consumer data protection efforts. In March Google agreed to adopt a privacy program (which also included 20 years of privacy audits) in response to charges that it deceived users and potentially violated user privacy when it launched the social networking service Buzz. And today the FTC announced that the Asia-Pacific Economic Cooperation forum has approved an initiative to create cross-border data privacy protection among APEC members. Companies that wish to participate in the APEC privacy system will undergo a third-party review and certification process that will examine their corporate privacy practices.

The New York Times reported last week that the European justice commissioner is planning to insert wording into a revision of the European Commission’s Data Protection Directive that would require non-European Union companies to abide by Europe’s rules on data collection or face fines and prosecution. The move could create a global commerce dispute surrounding Internet privacy, the Times reported. Facebook is also being examined by regulators in Ireland, Germany, Sweden, Finland, Norway and Denmark for potential violations of consumer data protection regulations.

Speaking of consumer data protection in the U.K., there was another noteworthy news item from the past couple of weeks: The U.K. Parliament’s Justice Select Committee has suggested jail terms for violations of the country’s Data Protection Act. Although fines are used to punish breaches of U.K. data protection laws, they provide little deterrent when the financial gain exceeds the penalty, Sir Alan Beith, the committee’s chairman, said in a recent report. “Magistrates and judges need to be able to hand out custodial sentences when serious misuses of personal information come to light,” he added. “Parliament has provided that power, but ministers have not yet brought it into force — they must do so.”

Although it seems Facebook is the prime target in these consumer data protection inquiries, perhaps it’s being used as a very high-profile example. If companies see their own vulnerabilities in the lapses of one with seemingly endless resources, they might start taking a long look at their own consumer data protection practices. They probably will soon have to anyway, as regulators increase their vigilance.

November 7, 2011  7:01 PM

How architecture best practices can help develop smarter GRC patterns

Adrian Bowles

Early in my career I was influenced by the work of Christopher Alexander, an architecture professor at the University of California, Berkeley.  Alexander and his team researched and cataloged patterns representing building, city and community construction best practices that had evolved over a considerable period of time. I used their seminal work, A Pattern Language, to guide the construction of my own home, and many of their principles to teach software engineering as a discipline.

Alexander, et al., note that, “Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”

Each of the architectural patterns includes a picture and a paragraph explaining how it works in context. Architectural patterns don’t constrain or inhibit creativity as much as they free designers to focus on the differentiations that have the greatest impact on the end user.

Twenty years ago, I documented some of my thoughts on software development patterns in an article titled “Systems Design: Lessons from Architecture.” I have recently been writing about the relationship between enterprise risk management and sustainability, and it occurred to me that GRC managers could benefit from taking a pattern-based approach to their work — especially for organizing their teams and system architecture.

Patterns are like musical forms — there are infinite varieties and parts to be created, but the overall structure is known to “stand the test of time.” We already have well-established sets of controls for GRC, such as COBIT and ISACA’s Risk IT. These are important, but they are not an alternative to patterns: their intent is to support auditing rather than to provide a creativity framework. Instead, patterns should complement controls.

A GRC pattern language, like a programming language or even a natural language, would be a shared resource to enable faster and more reliable enterprise system development. GRC patterns should include all the key constructs needed to ensure best governance and compliance practices (in this context, controls would be embedded in each pattern).

They also must be flexible. For example, with governance we know that exceptions are acceptable as long as there is a repeatable, auditable process for justifying and documenting them. Given the pace of technological advancement that drives business model changes, any pattern repository must allow for rapid changes, too.
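To make the idea concrete, here is a minimal sketch of what one entry in such a pattern repository might look like — a pattern with its controls embedded, plus a repeatable, auditable exception process. All class and field names here are hypothetical illustrations, not part of any existing GRC framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PolicyException:
    """A documented, auditable deviation from a pattern's controls."""
    justification: str
    approved_by: str
    approved_at: datetime

@dataclass
class GRCPattern:
    """One entry in a hypothetical GRC pattern repository."""
    name: str       # short pattern name
    problem: str    # the recurring problem, per Alexander's definition
    solution: str   # the core of the solution
    controls: list[str] = field(default_factory=list)        # controls embedded in the pattern
    exceptions: list[PolicyException] = field(default_factory=list)

    def grant_exception(self, justification: str, approved_by: str) -> PolicyException:
        # The repeatable, auditable process: every exception carries a
        # justification, an approver and a timestamp.
        exc = PolicyException(justification, approved_by,
                              datetime.now(timezone.utc))
        self.exceptions.append(exc)
        return exc

pattern = GRCPattern(
    name="Least-privilege access",
    problem="Users accumulate entitlements they no longer need.",
    solution="Grant access per role; re-certify entitlements quarterly.",
    controls=["Quarterly access review", "Role-based provisioning"],
)
pattern.grant_exception("Legacy system lacks role support", "CISO")
print(len(pattern.exceptions))  # 1
```

The point of the structure is that the controls travel with the pattern, and every exception is data — reviewable, searchable and retirable as conditions change.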

I believe we need a GRC pattern guidebook, similar in spirit to Alexander’s work but one that leverages a broad community supported by collaboration tools and assembled by a flexible process. Changes in the environment may lead to the identification of new patterns based on analytics, and pattern retirement when conditions change is equally important. In other words, we need a Wiki to capture, catalogue, review and update patterns as a community.

With that in mind, SIG411 LLC is launching an open source patterns project that will include GRC patterns contributed by practitioners and academics who will be recognized for their contributions. The scope of the project is broader than GRC, as it will include patterns for all aspects of sustainable enterprises and societies. But given my personal interest in the intersection of enterprise risk management and sustainability, GRC will be an early focal point. I encourage all interested parties to get involved and contribute, as well as use, the patterns from this Wiki.

Adrian Bowles has more than 25 years of experience as an analyst, practitioner and academic in IT, with a focus on strategy and management. He is the founder of SIG411 LLC, a Westport, Conn.-based research and advisory firm. Write to him at

October 31, 2011  7:00 PM

Cybersecurity threats pose big problems for small businesses

Ben Cole

National Cybersecurity Awareness Month has drawn to a close, but it’s clear that much still needs to be done to protect information online. One recent survey has found that small businesses — which likely don’t have the resources to bounce back from a major data breach — could be particularly vulnerable to cybersecurity threats.

The online survey of 1,045 small business owners, sponsored by Symantec Corp. and the National Cyber Security Alliance, found that 70% have no formal Internet security policy for employees — and of those, 49% do not have even an informal one. In addition, 45% of the small business owners surveyed said they do not provide Internet safety training to their employees.

These findings stand in stark contrast to the apparently false sense of security among small and medium-sized businesses (SMBs). Eighty-five percent of the survey respondents said they believe their company is safe from hackers, viruses, malware or a cybersecurity breach, and 69% agreed that Internet security is “critical to their business’s success.”

It’s clear that the survey respondents aren’t following the main theme of this year’s Cybersecurity Awareness Month: the importance of educating everyone and making them aware that they need to do their part to protect their information online.

Other survey highlights (or lowlights, as the case may be):

  • 56% of respondents have no Internet use policies to clarify which websites and Web services employees can use; 52% have a plan in place for keeping their business cybersecure.
  • 67% have become more dependent on the Internet in the last year; 66% depend on it for day-to-day operations.
  • 57% of respondents say a loss of Internet access for 48 hours would be disruptive to their business, and 76% say that most of their employees use the Internet daily.
  • 37% have an employee policy or guidelines in place for the remote use of company information on mobile devices, and 36% have a policy outlining employees’ acceptable use of social media.
  • 59% do not use multifactor authentication to access their networks.
  • 50% report they always wipe data off their machines completely before they dispose of them; 21% never do.

The survey also found that SMBs are woefully unprepared to react after a data breach. Forty percent of respondents said they don’t have a contingency plan outlining procedures for handling and reporting a data breach or loss of information.

Ignoring the problem of cybersecurity threats can be very costly. Data released by Symantec shows that 40% of all targeted cyberattacks are directed at companies with fewer than 500 employees. In 2010, the average annual cost of cyberattacks to SMBs was $188,242. Business Insider reported in September that approximately 60% of small businesses will close within six months of a cyberattack.

What is it going to take for these small businesses to grasp the impact of cybersecurity threats? They need to realize that lax cybersecurity measures, combined with their sparse resources, make them particularly vulnerable. Shoring up online security might be costly and time-consuming, but these businesses need to take these threats seriously, before it’s too late.

October 18, 2011  1:32 PM

New attacks test Sony’s transparent approach to data security

Ben Cole

After hackers gained access to the personal information of more than 100 million user accounts last spring, Sony overhauled online security and created a chief information security officer (CISO) position. On Sept. 6, Philip Reitinger joined Sony as its senior vice president and CISO — and he’s already been busy.

In a post to the PlayStation blog last week, Reitinger said Sony detected attempts on Sony Entertainment Network (SEN), PlayStation Network (PSN) and Sony Online Entertainment (SOE) to test “a massive set” of sign-in IDs and passwords against the company’s network database. The attempts appeared to include a large amount of data obtained from one or more compromised lists from other companies, sites or other sources, Reitinger said.

“As a preventative measure, we are requiring secure password resets for those PSN/SEN accounts that had both a sign-in ID and password match through this attempt,” Reitinger wrote in the blog post.
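What Reitinger describes amounts to screening for credential stuffing: testing a list of sign-in ID/password pairs leaked from other sites against the company’s own account records, and forcing a reset wherever both match. A minimal sketch of the idea follows — the account data is hypothetical, and the plain SHA-256 salted hash is for illustration only (a real system should use a slow key-derivation function such as bcrypt or scrypt):

```python
import hashlib

# Hypothetical account store: sign-in ID -> (salt, salted password hash)
accounts = {
    "alice@example.com": ("s1", hashlib.sha256(b"s1" + b"hunter2").hexdigest()),
    "bob@example.com":   ("s2", hashlib.sha256(b"s2" + b"swordfish").hexdigest()),
}

# Credentials obtained from a compromised list at some other site
leaked = [("alice@example.com", "hunter2"),
          ("bob@example.com", "letmein")]

def accounts_to_reset(leaked_pairs):
    """Flag accounts where BOTH the sign-in ID and the password match."""
    flagged = []
    for user, password in leaked_pairs:
        if user in accounts:
            salt, stored = accounts[user]
            candidate = hashlib.sha256(salt.encode() + password.encode()).hexdigest()
            if candidate == stored:   # exact ID + password match -> force reset
                flagged.append(user)
    return flagged

print(accounts_to_reset(leaked))  # ['alice@example.com']
```

In this sketch, Bob’s leaked password doesn’t match his stored credential, so only Alice’s account would be flagged for a forced password reset — exactly the “sign-in ID and password match” condition Sony described.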

Less than one-tenth of 1% of the PSN, SEN and SOE audiences may have been affected by the data security breach, and Reitinger assured users that credit card numbers were not at risk. This was a relatively low-risk data security breach, but perhaps Sony’s reaction was a case of lessons learned: After the April breach, Sony was criticized for waiting a week to notify customers that their personal information might have been compromised. In addition, it took more than two weeks to fully restore the network. Needless to say, Sony users (and federal regulators) were not impressed by what some viewed as a lackadaisical reaction.

There has been much public outcry over Sony’s data security breach, and those of other companies, in the past year. This likely influenced the SEC last week to issue guidance calling for the “disclosure of timely, comprehensive and accurate information” surrounding cybersecurity risks.

Did Sony’s online security overhaul help detect this breach before it became another fiasco? Although critics have said Sony simply hired Reitinger as an insurance policy to pacify investors and customers after the April data security breach, he showed his value here. At least now the Sony brass and their customers have someone to go to for information about any further breaches — what happened, how it happened, how they are going to handle it in the future. (Unfortunately for Reitinger, it also gives them someone to blame.)

But if nothing else, the reaction to last week’s data security breach might be indicative of a new trend of taking a proactive approach and letting online customers know what they can do to protect themselves and their information.  Judging by the comments made to Reitinger’s blog post, people are mostly happy with Sony’s reaction to the potential data security breach. Many praised Reitinger and Sony for keeping them informed.

Perhaps Sony and companies like it have learned their lesson about the futility of trying to keep a breach out of the spotlight, and now know that transparency is the best course of action. If the SEC’s recent guidance is any indication, federal regulators and customers are going to be watching companies closely to ensure cybersecurity practices are kept above board.

October 17, 2011  7:10 PM

Seven common regulatory compliance requirement assumptions to avoid

Kevin Beaver

Compliance means different things to different people. Indeed, regulatory compliance requirements are — and should be — handled differently based on the unique needs of the business. The ugly reality is that so many assumptions are being made about compliance that they often skew the perception of what’s really going on.

Here are what I believe to be the most dangerous assumptions we make about regulatory compliance requirements and how they can get us — and our businesses — into hot water:

1. We’re compliant, so our information is safe. The most common assumption is that compliance equals security. It doesn’t. Never has and never will. Your business may be “compliant” at the moment, but odds are you’ve still got tons of low-hanging fruit that needs to be fixed. It’s time to dig deeper.

2. Our lawyer is in charge, so all is well. Lawyers should have the final say-so, but they shouldn’t be calling all the shots. Compliance is much more complex than audit reports and contracts. There are information risk assessments, vulnerability management, incident response, access controls and more. All the right people across the board need to be involved throughout the compliance process.

3. It’s not worth the money to become — and stay — compliant. According to the Ponemon Institute, the cost of noncompliance is 2.65 times the cost of adhering to regulatory compliance requirements. Do what needs to be done, and you’ll save a tremendous amount of money and effort. As time goes on, you’re going to be forced into compliance eventually. Why not get started now?

4. We encrypt our PII — that’s the ultimate security control. Even though data is encrypted, there are numerous ways to exploit known flaws, especially if the encryption wasn’t properly implemented or isn’t being managed the way it needs to be. You need encryption — but don’t assume it’s working as intended.

5. Our tools are telling us that we’re compliant; enough said. Good network and security tools are essential for visibility and control, but you can never rely on them completely. Be it identity management, network monitoring, vulnerability management — you name it — canned reports from such tools most often do not reflect reality. You have to look closer and validate for yourself.

6. We’ve done everything required by the regulations; that’s all we need to do. Focusing on what’s required doesn’t mean you’ve covered all your bases. The minimum regulatory compliance requirements are often just a baseline, and they may not be what your business really needs. Furthermore, I can’t tell you how many times I’ve seen businesses “become” compliant without ever performing a single information risk assessment. You can’t possibly put the right security controls in place if you don’t even know what needs attention.

7. We had a breach and subsequent compliance sanctions, so we learned our lessons and are much more secure now. People often assume they know what others are thinking and that someone else is taking care of what’s needed. This happens most readily after a breach, once we get back into our day-to-day work and become complacent. Assuming everything has been uncovered and fixed is an invitation for something else to go awry. Experiencing a data breach means you’ve got to up your game, big time — and never let your guard down again.
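The risk assessment gap flagged in No. 6 lends itself to a sketch. At its simplest, an information risk assessment scores each asset by likelihood and impact so that controls go where they matter most. The scales, assets and treatment threshold below are purely illustrative, not prescriptive:

```python
# Minimal likelihood x impact risk register (1-5 scales, illustrative only)
risks = [
    {"asset": "Customer PII database", "likelihood": 4, "impact": 5},
    {"asset": "Public marketing site", "likelihood": 3, "impact": 2},
    {"asset": "Payroll system",        "likelihood": 2, "impact": 4},
]

def prioritize(register, threshold=10):
    """Score each risk (likelihood * impact) and return the assets at or
    above the treatment threshold, highest score first."""
    scored = [(r["likelihood"] * r["impact"], r["asset"]) for r in register]
    scored.sort(reverse=True)
    return [asset for score, asset in scored if score >= threshold]

print(prioritize(risks))  # ['Customer PII database']
```

Even a back-of-the-envelope register like this answers the question Beaver poses: you cannot put the right controls in place until you know which assets need attention first.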

You cannot change what you tolerate. Fix your oversights and gaps surrounding regulatory compliance requirements now, before they bite when you’re least expecting it.

Kevin Beaver is an information security consultant and expert witness, as well as a seminar leader and keynote speaker at Atlanta-based Principle Logic LLC. Beaver has authored/co-authored eight books on information security, including The Practical Guide to HIPAA Privacy and Security Compliance and the newly updated Hacking For Dummies, 3rd edition. In addition, he’s the creator of the Security On Wheels information security audiobooks and blog.

October 10, 2011  6:56 PM

Paranoid Android: HTC mobile device security questioned by researchers

Ben Cole

October is National Cybersecurity Awareness Month, and the overarching theme this year is to spread awareness of every Internet user’s role in securing their information. In other words, YOU are the first line of defense in protecting your information, so pay attention to security vulnerabilities stemming from your devices.

But certainly not everyone who goes online is familiar with these persistent threats. Luckily, it appears some watchdogs are here to help. This was evident when the Android Police blog recently reported a “massive security vulnerability” in HTC’s Android devices.

Android Police researchers found that in recent updates to some of HTC’s devices, the company introduced a suite of logging tools designed to collect user information — way too much information, according to the researchers. Researchers found that on affected HTC devices, any application that requests a single Internet permission (normal for any app that connects to the Web or shows ads) can access:

  • The list of user accounts, including email addresses and sync status for each.
  • Last-known network and GPS locations, and a limited history of previous locations.
  • Phone numbers from the phone log.
  • SMS data, including phone numbers.
  • System logs likely to include email addresses, phone numbers and other private info.

“If you, as a company, plant these information collectors on a device, you better be DAMN sure the information they collect is secured and only available to privileged services or the user, after opting in,” wrote Android Police blogger Artem Russakovskii when announcing the HTC device security vulnerability. “That is not the case.”

After the flaw was exposed by the Android Police, HTC confirmed that it found in its software a “vulnerability that could potentially be exploited by a malicious third-party application,” and that it was working on a fix. Customers will be notified of how to download and install the security fix, the company said. HTC also urged customers to use caution when downloading, using, installing and updating applications from untrusted sources.

At least HTC moved quickly to correct the problem and inform its customers of the vulnerabilities, right? Well, not so fast. After finding the security lapse, the Android Police contacted HTC on Sept. 24 and received no real response for five business days, after which the Android Police released the information to the public.

Perhaps HTC was waiting to tie in its response to the vulnerabilities with Cybersecurity Awareness Month.

The point to take from the HTC mobile device security story is that companies are not going to come out and announce when there is a huge risk in using their products — especially those designed for consumers. The problem is that the average consumer is not going to know what to look for, and will trust that information is protected during everyday use.

As shown with the HTC mobile device security issue, this is not always the case. How many more security vulnerabilities are there in other mobile devices that have not been exposed yet? And it’s not just individual consumers that need to be concerned: The spread of personal devices (and their associated security risks) in the workplace makes due diligence necessary. People can obviously no longer just assume that they’re protected.

October 3, 2011  6:18 PM

Communication is key to effective cybersecurity strategy

Ben Cole

October is National Cyber Security Awareness Month, and this year’s theme is meant to remind individuals of their role in securing information, as well as the devices and the networks they use. Failure to understand this relatively simple cybersecurity message can have embarrassing consequences, as banking giant The Goldman Sachs Group, Inc. learned earlier this week when hackers published the personal information of several employees, including CEO Lloyd Blankfein.

Goldman Sachs was not the only big name in cybersecurity news this week. After news surfaced that Facebook had been gathering information about the websites its users visited even after users logged out of the social network, two congressmen urged the Federal Trade Commission (FTC) to investigate the company’s practices.

In a letter to the FTC, Congressmen Edward J. Markey (D-Mass.) and Joe Barton (R-Texas) said tracking users’ behavior without their knowledge “raises serious privacy concerns.” Facebook says it is working to correct the matter, but Barton and Markey want the FTC to investigate and make sure the practice is stopped. Barton and Markey also urged the FTC to investigate the use of so-called “supercookies” that allow websites to capture personal data about consumers.

Daniel Conroy, CISO and global head of information security at BNY Mellon Corp., says organizations should make clear to employees their role in protecting their own — as well as the company’s — sensitive information. Conroy said protecting data starts with communicating to employees what is acceptable and what is not with regard to risk management. He suggests providing security awareness training to all employees.

“If employees don’t know what information is important to the company, how are they going to know what not to post?” Conroy asked during a presentation at the MIS Training Institute’s IT Governance, Risk and Compliance Summit in Boston last month.

Conroy focused on how the expansion of social media makes sensitive company information especially vulnerable — and noted that it’s important to establish a balance between satisfying business needs and mitigating risk when using such sites. He noted that avoiding things as simple as posting organizational charts at companies online could go a long way toward avoiding leaks of business info. Most importantly, he suggests companies anticipate the evolving risks as part of a cybersecurity strategy, and communicate these risks to all employees. Companies could go so far as to create a security awareness campaign using techniques such as posters, videos and email blasts to get the message out and encourage employees to participate.

The bottom line is that protecting information starts with the individual. With more people incorporating personal technology in their business activities, companies can be hurt when personal information is leaked. As a result, companies would be well served to show employees how they can protect themselves and the information they offer online … which will in turn help protect the business.

September 26, 2011  7:17 PM

Lack of cloud computing standards causes concern as adoption spreads

Ben Cole

Recognizing the “significant opportunities” surrounding cloud computing, the Subcommittee on Technology and Innovation held a hearing last week to examine the benefits — and obstacles — of widespread cloud adoption. The hearing could be a first step to more exacting cloud computing standards.

Subcommittee members said cloud computing can provide users with increased computing capability, greater efficiency and lower energy and infrastructure costs. However, cybersecurity remains a major concern for many users, said Subcommittee Chairman Rep. Ben Quayle (R-Ariz.). Quayle pointed out that users must have confidence that their data and applications, as well as their privacy, will be protected. Quayle added that cloud service providers would need to offer users different tiers of security depending on the sensitivity of their data in order to alleviate these concerns.

Nick Combs, federal chief technology officer at EMC Corp., and Dr. Dan Reed, corporate vice president of the technology policy group at Microsoft, were among those testifying at the hearing. In response to Quayle’s concerns, Combs suggested cloud security be driven by a “flexible policy” aligned to the business or mission need, and that a common framework would be needed to ensure that cloud security policies are consistently applied. Reed added that clear policy goals surrounding cloud security are necessary, but regulators need to be careful to avoid rules that will hinder cloud innovation or quickly become outdated.

These cloud security concerns echoed statements recently made by Alan Barnes, director of risk and advisory at Services Assurant Inc., at a GRC training summit in Boston. Barnes noted that cloud computing creates additional third-party security risks, such as hacking, a lack of compliance standards and intellectual property vulnerabilities. Barnes added that the current lack of agreement on cloud computing standards ensures that cloud provider risk evaluation will remain inexact and inconvenient for the next several years.

The National Institute of Standards and Technology (NIST) is spearheading stakeholder efforts to develop cloud data security and interoperability standards, which witnesses at last week’s hearing said are critical to the cloud’s success.

“As an agency considers migrations to cloud computing, NIST must develop the appropriate consensus standards and guidelines to ensure a secure and trustworthy environment for federal information,” according to a statement from the Subcommittee on Technology and Innovation.

Developing such “consensus standards and guidelines” is an appropriate first step to alleviate concerns surrounding the mass migration to the cloud. But until these cloud computing standards are established and implemented, users need to remain cautious when moving to the cloud.

September 19, 2011  4:08 PM

Lawmakers increase attention to online data security and privacy

Ben Cole

A few weeks ago in this space, I wondered if increased scrutiny of Google’s business practices was just the beginning of the federal government’s efforts to regulate the Internet. Judging by a handful of news stories and announcements last week, online data security and online privacy concerns have shot to the top of at least some lawmakers’ lists of concerns.

For starters, Sen. Richard Blumenthal (D-Conn.) introduced the Personal Data Protection and Breach Accountability Act of 2011. The legislation is designed to protect consumers’ personally identifiable information and improve online data security.

The bill would create a process for companies to establish appropriate online data security, and it would hold companies accountable for failing to comply with those plans. In a provision likely spurred by Sony’s slow response to a huge data breach earlier this year, Blumenthal’s bill also requires companies to promptly notify consumers after a breach has occurred, and to provide consumers with solutions to alleviate online security threats.

To help prevent future breaches, the bill encourages better information-sharing among federal agencies, law enforcement and the private sector to alert businesses of specific online security threats.

Also last week, an Op-Ed piece in The New York Times highlighted an upcoming Supreme Court case that could have huge ramifications for online privacy. This time, though, the concern is how much information the government should have access to.

The case, United States v. Antoine Jones, concerns a GPS device placed on the car of a suspected drug dealer without a warrant, which the man says was a violation of the Fourth Amendment.

“If the court rejects his logic and sides with those who maintain that we have no expectation of privacy in our public movements, surveillance is likely to expand, radically transforming our experience of both public and virtual spaces,” wrote Jeffrey Rosen, a law professor at George Washington University.

Rosen pointed out that technologies such as Facebook’s facial-recognition tool could be used by law enforcement to help identify criminals. Rosen also referenced a 2008 comment from a Google executive saying that, within a few years, public agencies and private companies could be asking Google to post live feeds from public and private surveillance cameras all around the world.

“If the feeds were linked and archived, anyone with a Web browser would be able to click on a picture of anyone on any monitored street and follow his movements,” Rosen wrote in The New York Times piece.

These news items were among a handful reporting on online data security regulations in the past week. Here are some others:

  • The Federal Trade Commission announced it is seeking public comment on proposed amendments to the Children’s Online Privacy Protection Rule, which gives parents control over what personal information websites may collect from children under 13. The amendments are aimed at keeping pace with new technology and devices that give children Internet access.
  • Connecticut Attorney General George Jepsen announced the creation of a task force to investigate Internet privacy and data breaches while educating the public and businesses about data protection.
  • On Thursday, the House subcommittee on Commerce, Manufacturing, and Trade held the first of a series of hearings to address online privacy. The hearing examined the European Union’s privacy and data collection regulations and how they have affected the Internet economy. Some have expressed concern that limiting the tracking of Internet users (as is done in the EU) could dramatically hurt online marketing effectiveness.

Federal online privacy concerns and the increased government involvement in online data security may be warranted, at least according to a new PricewaterhouseCoopers survey of 9,600 security executives. The survey found that 43% of global companies think they have an effective information security strategy in place and are proactively executing their plans. However, only 16% of respondents say their organizations are prepared, with security policies able to confront an advanced persistent threat attack.

It appears that most people with a stake in the game are at least aware of the severity of online security threats. Perhaps a combination of legal regulations and private efforts surrounding online data security could have the movement heading in the right direction.
