It seems that product vendors, in ever more desperate efforts to introduce something “new,” think that some piece of software will help (help them, that is). The problem is, this software is often developed without good quality controls, and probably without testing of any sort other than “it works!”
Once this product is handed off to suppliers and retail merchants, it is “untrackable” in most, if not all, respects. So a recall gives me little comfort, especially since sales of the Bunny’s charger started in 2007.
“Energizer is currently working with both CERT and U.S. government officials to understand how the code was inserted in the software,” Energizer said in a statement.
An additional question might be, “What are your quality controls for software that is issued with your products?”
It’s not that manufacturers are unaware of this issue. In 2007, Seagate Technology admitted that an unknown number of its hard drives left an Asian manufacturing plant with Trojan horses. (Wonder where they are now?)
And, of course, there was Best Buy’s digital picture frame, sold during the Christmas 2007 holiday season (was 2007 the year for this, or what?) with software that included a Trojan. Although the company claimed it was making efforts to contact customers (how, exactly?), it never specified the type of Trojan, nor did it do much more than post an announcement on its website.
Perhaps enough reputation damage will persuade manufacturers to improve their Quality Assurance practices (how about a little security in the software development process?).
Meantime, I guess it’s best to keep an eye on any software that comes with a “product.”
Turns out “Buzz” is a function that integrates social networking, instant messaging, blogging and any of the other applications within Google. Unfortunately, it does a whole lot more.
When I read that EPIC (Electronic Privacy Information Center) had filed a complaint with the FTC about “Buzz,” I was surprised. Then I read the complaint. I strongly recommend you read it, and you will see why a class action suit has also been filed today on behalf of the 37 million users of Gmail.
It turns out that regardless of whether a user clicked the button labeled “Sweet! Check out Buzz” or “Nah, go to my inbox,” Google Buzz was activated. No big deal? Ohhh yes it was. According to EPIC, and others:
Once Google Buzz was activated, the tool automatically populated my “following” lists with my most frequent email contacts. This happened automatically after I logged in, regardless of what I selected at the splash screen. In other words, if I didn’t change any of the default settings in Google Buzz, someone could go into my profile and see the people I email and chat with most.
Google Buzz did not warn me that creating a “Profile” in Buzz would make my frequent email contacts into “followers” and “followed by,” and that this list would be made automatically available to those people and public on the web.
As we all know, web pages are archived and stored all the time. I can’t take that information “back.”
And neither can anyone else.
If you’re as horrified as I am, here’s a link to disable the thing. You start by clicking that tiny little colored icon at the top right of the mail page.
That splash page was NOT an opt-out. I had no choice about whether to start using it, or not.
Check it out before you disable it. See how much default information about you and who you contact is available. I’m furious. So are a lot of other people. Google made some changes last week that do not go far enough:
“Google will stop auto-following the people you regularly email and chat with, but will instead suggest that you follow these people when you first start using Buzz. You’ll be shown a bunch of faces and check boxes to make sure you’re really interested in following these people.” Those “checkboxes” are still automatically turned on.
If you are using this BAD IDEA, you must go in and manually set privacy settings.
Not good news for privacy. Big bad news for Google.
Their network contained at least 200 users spread out over multiple sites. I asked to speak with their network admin, and they said, “Oh, that’s Sally over in clerical. She’s part-time. We have a local company that comes in when we have any problems with hardware. They monitor our firewall, too. It’s too expensive to have a full-time person.”
Rule of Thumb (I forget how many I’ve got now): More than 50 users? Rent or hire a full-time administrator. (Not from the clerical department either). It’s not fair to the part-time employee, and no one is there to monitor what the IT company is or isn’t doing.
I asked for a network diagram, and they said, “Oh, we don’t have one. Do you really need it?”
Rule of Thumb number something-or-other: If you don’t have a network diagram, you don’t have your network. The hackers do.
I suggested some free tools for acquiring a network diagram, such as Spiceworks, which is nice for monitoring (you have to put up with some ads, but you can get rid of them for a fee) and Look-at-LAN, available for free at CNET, along with other free tools. They said they’d ask the IT company to do it.
At that point, I thought I ought to look at their server room. It was a good sign that they had one, and the door even locked. I went in, looked around. The part-time clerical person said that they had just moved from an older building and the IT company had moved their computers and server room/data center/storage closet over to the new building. It really was a nice room. No temperature monitoring, no fire alarm, and overhead water sprinklers, but a nice new room.
After looking at the equipment for a few minutes, I said, “So, where’s the firewall?” She didn’t know what a firewall looked like (bad sign). She called up the IT company, who said it was in the building, because they were getting reports from it.
At that point, I had a brainstorm. I asked, “Which building, exactly?”
So I thought I’d offer up, in the spirit of the season, my two cents:
Under the Category of Bad Idea, we have:
1. Yahoo, Bing, and Google are racing to integrate Twitter, Facebook, and other social media, putting up-to-the-minute postings from popular social networks atop search results.
Why, exactly, is this a good idea? When your teenager posts something numb on Facebook, will it now appear in multiple search results?
2. “Cloud Computing.” It has yet to prove itself secure, auditable, or a real cost savings in the long run. Losing real control of your data is going to be expensive.
3. Outsourcing overseas. Yes it’s cheaper, and so are the security measures. The laws are different, and will you travel to India to prosecute? This is what happens when the bottom line ignores common sense. See “cloud computing.”
For the Category of Internet Fraud:
1. Social networks have become an increasingly rich mine of personal activity that can lead to malware and theft of personal information. That now includes business networks such as LinkedIn. Don’t personally know who invited you? Now what do you do with the people you accepted? “Unfriend” them?
2. Peer-to-peer part 1 – Pretty soon (if they haven’t already) they’ll figure out how to encode malware into audio and movie files. Watch a movie, get a Trojan!
3. SQL injection – It’s only getting worse, and it’s one of the few things we could actually fix.
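Since SQL injection really is fixable, here’s a minimal sketch of the fix, using Python’s built-in sqlite3 module with a made-up `users` table for illustration. The vulnerable version builds the query by string concatenation, so attacker input rewrites the query itself; the parameterized version passes the same input as data, and the injection goes nowhere.

```python
import sqlite3

# Hypothetical single-table database, just for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "' OR '1'='1"  # classic injection payload

# Vulnerable: concatenation lets the payload become part of the SQL,
# turning the WHERE clause into "name = '' OR '1'='1'" (always true).
vulnerable = "SELECT secret FROM users WHERE name = '" + attack + "'"
print(conn.execute(vulnerable).fetchall())  # leaks every row

# Safe: a parameterized query treats the payload as plain data,
# so it matches no user and returns nothing.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (attack,)).fetchall())
```

Every mainstream database driver supports placeholders like this; the fix is mostly a matter of deciding to use them everywhere.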
In the Category of “We Knew This, Didn’t We???”
1. Peer-to-peer part 2 – Those networks are loaded with malware. Are your kids on one? Or two? Do they bring their laptops home from college loaded with them? Best hope they don’t do any banking or personal business on those machines. Wait, they’re kids! Kids think they’re invincible. Uh oh.
2. Millions of websites are unsecured, allowing iframe malware and other injected code to run and install Trojans, etc. We’re still surfing, and infection is rising. Solution, anyone? Other than having two computers?
3.The bad guys have already figured out banking’s “Is it your picture?!” attempt at cheap two-factor authentication. Get ready to have a keyring full of tokens – I have two already!
4. Leave your debit cards at home – how long do you want to spend hassling with the bank to get your money back?
5. Haven’t you encrypted all your laptops, yet?
And last, but not least, the category of “Bad Uses of Good Technology:”
1. People who break into cars and steal your GPS can use it to track back to your house for burglary purposes. Snopes says this is partially true. I suspect car burglars are not that bright, but who knows? Especially if I am not bright enough to put my GPS away.
If they get your car registration and your garage opener, you’ll be much more vulnerable. They’ll just use the GPS for easy driving to your house.
2. ATMs continue to siphon enormous amounts of money from banks, businesses, payment card processors, etc. No end in sight. Who will pay for it, ultimately?
3. “Cloud computing” can be used to speed up decryption across multiple CPUs. A bad use of Bad Technology! Double winner!
Ho, ho, ho. Have a great holiday, get lots of presents, and try to think of it as job security. That’s what I’ll be doing.
She also said that their IT department was very much against the idea, and she wanted some information to reassure them. Let’s hear it for the IT department!
Starting from today’s post on HelpSecurity.net describing social media as a “playground for cybercriminals,” a quick Google search will give you 16 million or so sites that are considering the issues (or trying to sell you something, as usual).
It seems that businesses have a common misperception about social media (it IS easier than saying Twitter, LinkedIn, Facebook, friending, and MySpace, but I really don’t like the phrase “social media.” It’s just a little too “marketing…”)
Business doesn’t yet understand that “attention” does not translate into “interest.” Social media is very transitory, and attention shifts constantly to the next new thing. I don’t really want to hear what a business is thinking four times or so a day. (Does a business think?) I’m not sure, actually, that writing a blog, as many businesses do, is a fab idea, either. People write blogs, not company presidents. But that’s just me.
The other issue, at least on Twitter, is trying to build up the “fan” base. Companies are pushing their employees to become “fans,” but that means the company can see the Twitter profiles of its employees. This has already resulted in company policy changes telling employees to behave themselves on Twitter (and elsewhere). This turns an employee’s fun toy into a business process, and nobody I’ve talked to who is on Twitter likes it, not at all.
Reading the article, I was struck by the fact that nowhere in it was the name of the third-party vendor mentioned. MassMutual is taking it on the chin (and quite defensively, I might add) because, ultimately, it is their data. They picked out the third-party vendor – I wonder how good their contract with the vendor is.
And the parties affected by this breach? Their employees, and their families.
The company announcement: “The vendor engaged a highly respected forensics team to investigate, and at this time we believe that no misuse of the information or fraudulent activity involving the data has occurred,” is disingenuous at best. We looked, but found nothing right now – so everything is OK!
Here’s the reality, however:
According to a recent report published by Javelin Research, (for which you must pay $1250.00, so you won’t be seeing me offer THAT as a download) individuals whose personal information has been compromised in a corporate breach are four times more likely to suffer identity theft or fraud.
This result runs contrary to MassMutual’s defensive statement, the kind very commonly used by breached companies, which often state that they have no indication that the compromised data has been used by criminals.
No vendor name, no information on how or when it happened, but trust us, your data is fine!
Lo and behold, I was visited by “Adam Wood,” who left a comment defending SMC and telling me/us what a wonderful job SMC is doing about this issue.
(That’s got to be a really crappy job for a lowly PR flack; surfing the Internet for comments on the SMC modem, and uploading a canned positive comment wherever he can.)
Despite “Mr. Wood’s” comments about how SMC is fixing the problem in an absolutely wonderful way, I admit to some slight cynicism. Especially after reading more from David Chen, the guy who found it in the first place.
According to Mr. Chen, Time-Warner claimed to have pushed out a “temporary fix.” But here is his latest conclusion:
It seems that a fix from Time-Warner or SMC consists almost entirely of PR.
Security is a corporate posture, not a pass/fail compliance test. You can pass the test and the next day change settings on the firewall that turn it into a router. Is the QSA still responsible? Nope. We don’t really know all the details of what happened at Heartland. But we do know that being compliant does not equal being secure. Never has, never will.
For a well-written post dissecting this finger-pointing, check out this article on CSO, written by Ben Rothke and Anton Chuvakin. Let’s just say that blaming the door lock when you’ve left the windows open is not a viable public relations option.
The corporate security posture should provide a mandate, from the top down, of the company’s position on information security. The power of C-level executives enforcing the mandate has to come into play. Otherwise it’s just window dressing – and open windows are no way to manage the security of your environment.
What IS the corporate policy? How effective is it? Is management promoting AND funding it? Policies that are effective also protect the information of employees. Everybody wins, even, in the long term, the stockholders.
Now, here’s the real story: If you read the article, the guy did not “hack in.” He used his VPN connection from his home (Clueless Number 1) to go into his employer’s network and access computers to mess up some programming.
His VPN connection had obviously not been disabled (Clueless Number 2) by his employer.
The police (Clueless Number 3) referred to him as a “computer whiz” for using his VPN connection from his home to get into his employer’s network.
Whiz? Cheese Whiz, maybe?
Here are a couple of my favorites:
Big Heads Maxim: The farther up the chain of command a (non-security) manager can be found, the more likely he or she thinks that (1) they understand security and (2) security is easy.
Plug into the Formula Maxim: Engineers don’t understand security. They tend to work in solution space, not problem space. They rely on conventional designs and focus on a good experience for the user and manufacturer, rather than a bad experience for the bad guy. They view nature as the adversary, not people, and instinctively think about systems failing stochastically, rather than due to deliberate, intelligent, malicious intent.
I would add “Software Programmers” to this one.
We’ll Worry About it Later Maxim: Effective security is difficult enough when you design it in from first principles. It almost never works to retrofit it in, or to slap security on at the last minute, especially onto inventory technology.
Head on over and check out the rest.