In the days since that conversation I’ve put some thought into the frameworks, because in the end the aforementioned CISO was committed to finding the NIST experience and eventually did. But what did that really mean? Having fairly recently had occasion to have both NIST 800-53 and the ISO 27000 documents in front of me, it was striking how similar they were, with only a few obvious distinctions between the two. Essentially the differences reflected more on the cultures that created them than on the risk factors they were focused on (NIST = U.S., ISO = European). But information technology architectures are fundamentally identical the world over, so despite formatting and spelling they are both addressing the same challenges whether or not they realize it. And for those of us who have familiarity with both, to know one is to know both, even if those who are committed to either one disagree. If you’ve worked on audit/assessment projects leveraging ISO 27000 material you’re immediately qualified to work on projects using the corresponding NIST framework, and vice versa. And if you have experience working with PCI standards, guess what? You can pretty much step in and work with either NIST or ISO content (except of course you have to expand your sights to include the entire infrastructure, not just whatever touches PAN data).
My preference is that we would consolidate globally into the ISO frameworks where applicable and maybe even fit that into the SSAE 16 process. I’ve read enough toothless SAS 70/SSAE 16 reports to know that it’s easy enough to rig the system to your advantage. And unless you’re a government agency that has to comply with NIST there’s little meaningful value to using NIST, whereas being ISO 27000 certified carries a great deal of weight within the audit/assurance community. Plus there’s the added benefit of InfoSec practitioners all getting trained and practiced both in building out ISO 27000-compliant solutions and in testing the related controls. Think about that: a single global security standard regardless of where you enter the profession. Having run a few practices in my career and way more than my fair share of engagements, I can tell you that has great appeal. Plus it would help eliminate awkward dialogues where my sixteen years of real and relevant experience is at least partially marginalized because it hasn’t all been with one particular standard.
Ultimately a framework’s only meaningful advantage is that it theoretically ensures consistency in how controls are identified and assessed. If you have someone who knows a framework but doesn’t really understand the details within it, that sort of defeats the process anyway, no matter how robust or thorough the framework may be. Perhaps that’s why I consider it a non-issue which frameworks a practitioner has used. I’d much rather work with someone who understands the technology and has a good feel for the details than someone who knows that SDLC is addressed in SA-3 for NIST or Section 12.5 for ISO 27002. But then again, I’ve always been more concerned with real risk, not perceived risk, so this shouldn’t be surprising to anyone who’s read my content in the past.
A security framework by any other name would be just as comprehensive, you know what I mean?
Truth be told, while I’ve spent somewhere near seventy-five percent of my time over the past ten years working for financial institutions, I’ve also done a fair amount of work for insurance companies, mostly centered on SOX with occasional diversions into general risk assessment work. The drivers in the insurance industry are different in terms of oversight and requirements, and so the volume of work isn’t nearly the same. But that by itself raises a question: Why isn’t the insurance industry as regulated as financial institutions?
I’ve now done major audit and assurance work for financial institutions, insurance companies and health care providers and for most of them the risk profile is almost identical in terms of non-public personal information. So why isn’t the level of scrutiny equal across all three of them? While some might start spouting about how it is, about how states routinely audit insurance companies and how the health care industry has to comply with HIPAA the truth is that banks and credit unions are held to a much higher degree of accountability than any other vertical. Why is that?
I’m fond of routinely, almost incessantly beating the drum about how it’s all about the risk. I get my initial client opportunities because I have a deep resume with relevant experience but I generate repeat business because I tend to whittle things down to what matters most both to my clients and to their oversight providers (auditors and examiners alike). Compliance exists because risks need to be addressed – if the risks aren’t credible or likely the work should be adjusted to reflect that. But where the risks are real they’re really real. The type of data shared with an insurance company is in many ways even more sensitive than anything shared with a bank and most of what’s shared with insurance companies is also shared with health care providers. Yet there’s no true Federal oversight for the insurance industry and HIPAA is about as much of a toothless tiger as anything I’ve ever encountered.
I recently completed a boatload of documentation to get my family on a new health insurance plan. I turned over every piece of sensitive information I have for every member of my family minus my bank account information because that’s what was required. I had to provide all of this online and follow that up by sending them an impressive array of hard-copy documents with even more sensitive information that should never be kicking around in the public domain. In the past I’ve also been required to provide my bank account information because one plan in particular would only provide coverage if they could automatically deduct monthly premiums via ACH drafts. So now the insurance industry has access to it all; name, address, social security number, date-of-birth, maiden name, medical history and banking information. And yet there’s no true oversight agency that’s responsible for making sure they’re protecting all of MY information.
To compound my frustration, of the four insurance companies I’ve conducted work for since 2006 (two of which are Fortune 500s), exactly none of them has something akin to a Chief Information Security Officer. They all have risk people focused on the business side of things (because that’s necessary to protect profitability) but that’s it. There’s typically an information security manager who’s part of the infrastructure team but who almost never reports right into the senior-most technology person (e.g. CIO, CTO). Any audit work that occurs is coordinated across multiple IT managers, and on rare occasions there will be an audit/assurance manager. However, in the one example I personally know of where that position exists, the person in the role was really just a converted IT manager who obtained a CISA designation – no fundamental audit or assessment experience.
The question has to be asked: Why is it that banks and credit unions are heavily regulated regarding protection of non-public personal information but other industries with similar risk profiles are not? Why aren’t insurance companies required to comply with FFIEC-type guidance? Why isn’t there a Federal regulatory agency that is responsible for keeping an eye on the insurance industry the way the FDIC, OCC, FRB and NCUA do so for their financial institutions? And trust me, whatever oversight exists for the insurance and health care industry is largely ineffective. Why is my sensitive information considered more at risk within a banking infrastructure than it is within an insurance infrastructure? Having been on site for both and examined their internal controls I can’t answer that question, that’s for certain.
Which is why I don’t much care for any manner of compliance-based assessments that are self-administered.
Companies have had this crazy notion for more than a decade now that the best way to identify and address risks inherent within the infrastructure is to ask key stakeholders a somewhat generic set of questions and use their responses to figure out what’s what. Most of the time the people driving these initiatives are either information security professionals or corporate compliance people who either believe they already know where the problems are or are looking for the simplest and easiest way to satisfy some requirement. But what they often fail to grasp is that it’s almost impossible to draft a common set of questions that applies to the vast majority of stakeholders or, worse, that will be interpreted consistently across the stakeholder population. Plus the perceived benefit of using a self-assessment approach to reduce effort and required support resources is almost always an illusion. Most of the time saved in not having someone ask the questions and record the answers is instead consumed by needing to explain the format, explain the questions, or clarify and clean up the responses. While supporting one such program recently, each assessment required a kick-off meeting, a follow-up meeting to review the status of the assessment, a third meeting to review the initial draft of the questionnaire, a fourth meeting to review the resulting report(s), and a largely untracked number of hours to help generate all of the related support documentation. Regardless of the size of the entity being assessed, each one consumed somewhere close to eight hours. While that might seem like a scary large number, the really scary part was that depending on which risk analyst was responsible for the assessment and the personality/mindset of the stakeholder completing it, the results looked very different from one another.
It was almost impossible to generate meaningful metrics across the assessment population because a “Yes” answer for one question might mean the same as an “N/A” in another; there was no way to know that.
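That “Yes” here means “N/A” there problem can be sketched in a few lines. This is a hypothetical illustration, not anything from a real framework: the response categories and the decision to treat ambiguous answers as follow-up items are my own invention, but they show why raw answers have to be normalized before anyone aggregates them into metrics.

```python
# Hypothetical sketch: collapse free-form questionnaire answers into
# explicit, comparable buckets before computing any metrics. "N/A" and
# blanks are ambiguous, so they get flagged rather than counted.

NORMALIZE = {
    "yes": "control_present",
    "no": "control_absent",
    "n/a": "needs_follow_up",   # ambiguous: could mean "doesn't apply" or "we don't do this"
    "": "needs_follow_up",
}

def normalize(raw_response: str) -> str:
    """Map a raw answer to a normalized bucket, defaulting to follow-up."""
    return NORMALIZE.get(raw_response.strip().lower(), "needs_follow_up")

responses = ["Yes", "N/A", "no", "  YES ", ""]
buckets = [normalize(r) for r in responses]
# Only "control_present"/"control_absent" feed the metrics; everything
# else goes back to an analyst for a conversation.
```

The point isn’t the code, it’s the default: anything you can’t interpret with certainty is a follow-up item, not a data point.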
Another issue I’ve always had with the self-assessment approach is that while some stakeholders take it seriously and do a remarkably thorough job, others race through it with little hesitation just to fill in the blanks and get it off their desk. Sometimes you can detect which is which, sometimes you can’t. Plus the approach fails to capture much of the rich and relevant information related to each question and the underlying risk behind it. I recall conducting a team-driven risk assessment years ago where one stakeholder after the next, covering a very broad sampling of the infrastructure, kept lamenting the lack of a proper disaster recovery plan. They had something to show auditors/examiners but to a person no one believed it was a truly viable plan. All but the CIO brought it up as a concern, and when pressed a bit about why, they all shared a common worry: if their main office was closed unexpectedly for twenty-four hours, regardless of the reason, they were likely out of business. A related self-assessment question would ask “Do you have a current and recently tested DR plan?” – most respondents on that engagement would simply have selected “Yes” and moved on to the next question without ever being challenged to share their concerns. Where’s the value in having a repository of questions and answers when it fails to capture the true essence or dimension of risk?
And the biggest issue I’ve always had with self-assessment questionnaires and their related templates is that they’re so often poorly designed. I can guarantee you that each of them has at least one question which makes zero sense to anyone who reads it. They either answer it based on what they think it’s asking, answer with an “N/A” or require follow-up with the people managing the process to have it explained. And you’d be amazed how many times even the author is challenged to provide a meaningful answer (including this guy). One thing’s for certain: a self-anything needs to be designed and written so that everyone understands what they need to do without having their hand held. Plus it’s rare that questionnaires are customized so that each stakeholder is only asked those questions that truly make sense. An application owner should never be asked if their anti-virus solution is current and up-to-date. A business process owner should never be asked about software change management. Yet seldom have I encountered a self-assessment process which does anything like this, and so the audience is burdened with time-consuming yet unnecessary questions.
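Tailoring by role isn’t hard to do. Here’s a minimal sketch of the idea; the roles, questions, and tags are all invented for illustration, but the mechanism is just a filter keyed on who the stakeholder is:

```python
# Hypothetical sketch: tag each question with the stakeholder roles it
# applies to, then generate a per-role questionnaire so nobody is asked
# questions outside their domain.

QUESTIONS = [
    {"text": "Is your anti-virus solution current and up-to-date?",
     "roles": {"infrastructure"}},
    {"text": "Is software change management documented and enforced?",
     "roles": {"infrastructure", "application_owner"}},
    {"text": "Do you have a current and recently tested DR plan?",
     "roles": {"infrastructure", "application_owner", "business_process_owner"}},
]

def questionnaire_for(role: str) -> list:
    """Return only the questions relevant to this stakeholder role."""
    return [q["text"] for q in QUESTIONS if role in q["roles"]]
```

With this in place the business process owner sees one question instead of three, and none of them are about anti-virus.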
Really though, in the end my overriding problem with the self-assessment approach is that it fails to capture the expertise and guiding hand of true risk and assurance people. The process is often supported by analysts who don’t really have a feel for conducting assessments and are satisfied that all of the blanks are filled in. I have a nose for when there’s something beyond a simple answer and know when to scratch at the surface to bring it to light. By not allowing expert hands to guide the process, potentially huge amounts of valuable and possibly critical detail are being missed, undermining any perceived value of the process. When you consider that, all told and tallied, the self-assessment approach versus the guided assessment approach doesn’t really save you much time (if any) and that it results in a weaker finished product, why would you elect to use it? One answer is that regulators push for it because perhaps it’s better than nothing (I can’t get any of those I know to comment). Another is that the people sponsoring these initiatives lack the fundamental comprehension to understand their options and choose what they perceive as the less complicated approach (again, I don’t know for sure, it’s just a theory). What I do know is that when done right a risk assessment is management’s best friend, a fundamental belief behind the recent spike in ERM activity.
While recently having my car serviced the mechanic discovered a nest of some sort in the engine block, he thinks it was probably squirrels. Because of this discovery he went searching for all the wired connections to make sure they weren’t chewed up and destroyed, quite a few were as it turns out (the car had been idle for several months). The bill only added the cost of the replacement wires but nothing significant for the time it took to first find which were affected and then replace them. Had I attempted the repair myself I might have noticed the nest and likely would’ve cleared it but know for certain I never would’ve thought to check the wires, where to look for them or what to look for. I was smart enough to rely on a professional with a nose for that sort of thing and it saved me time, money and best of all the aggravation of having the car break down somewhere unexpectedly. Good thing I didn’t go the self-repair route.
Our family has had a PayPal account almost since PayPal has offered them. It’s remarkably convenient, it provides us great flexibility to shop online using a single payment source and I love that we’ve been able to change funding sources several times over the years. It’s always conveyed a certain sense of security; I’ve just always felt safe using PayPal. I’ve even gone so far as to suggest that at some point, if PayPal management grows things just right I could see a future state where paper currency and maybe even actual physical credit cards go away and are replaced by some version of their services. When I discovered this past year that Home Depot already allows you to use PayPal to make in-store purchases I was convinced I was right. Now I’m not so sure.
Over the past year or so I’ve been getting the occasional email ping from PayPal regarding our reaching a spending limit. It’s a fairly high limit for most, but considering that we’ve been using PayPal to make purchases going back nearly a decade, maybe not so much. But the message has been quite clear: if we didn’t verify our account before reaching this limit, it would become “the maximum amount of money you can send or use for purchases before you need to become Verified”. And how you become verified is quite simple – either give up your bank account information or apply for a privately owned credit card. No, seriously, those are the only two options.
My first thought was that although I liked having the protective layer of a credit card product buffering my PayPal account from my actual money, I was okay with providing bank account information. It’s not like I don’t use that in other places to make payments, so there wouldn’t be any enhanced risk in doing so again. I wasn’t going to apply for a PayPal-based credit card because I don’t want one or need one and I wasn’t looking for a new credit source anyway, I just wanted to continue using PayPal. I clicked on the option to provide my bank account information, entered the routing and account details on the initial screen, clicked “Submit”, and was presented with a screen that I still can’t believe exists. Right there before my eyes was a screen from PayPal in which they ask me to provide my online banking user-id and password so they can verify a series of PayPal-generated payments, thus confirming my banking details. Let me repeat that one more time: PayPal asked me to provide them with my online banking user-id and password.
Has PayPal lost its collective mind? Seriously, have they?
I was stunned, almost to the point where I couldn’t get coherent words to flow. I immediately fired off an email to PayPal customer support asking them how they could do something so outrageous. Within minutes I received an automatically generated reply, which I always find insulting, as though I’m not worth an actual intelligent and personal response. It was a complete regurgitation of everything stated on their website and completely ignored the gist of my email. I fired off a second email missive, this time way more specific. Here’s what I wrote:
Here’s the sequence of events:
Wednesday morning I received an email alert from a company I use that my automatic monthly payment was declined. Knowing full well it wasn’t a balance issue, I assumed correctly that my bank had cancelled the card. As I travel extensively and rely on the card exclusively, I made my way to a local branch later that morning. Along the way I called into the service center and confirmed my suspicions: Visa had informed the bank that my card was part of a range of numbers that was possibly exposed via a breach. I asked if it was possible to learn the name of the offending vendor and was told (same as last time) that Visa doesn’t share that information. As I am now a two-time victim it’s easy to spot the trend and hard to ignore the possibility that it might have involved the same vendor both times. It wound up taking three visits to a branch to straighten me out and actually get a functioning card in my wallet. The inconvenience is anything but benign, as I use the card in several places and will now need to make manual, one-off payments with the temporary card while awaiting the permanent card so that I can update the affected accounts. By the time this is all said and done it will have resulted in my spending more than a half day of billable time trying to fix a problem I didn’t create.
A few things need to change.
We collectively as an industry and a society need to accept that both identity and card theft are mainstream occurrences and adjust accordingly. Legislation is needed to further insulate the victims (like me) from any extended damage or inconvenience and ensure as smooth a process as possible to allow us to continue living our lives. Because right now I don’t just feel like a victim, I feel like I’m being punished for being one and treated like I simply don’t matter.
Hey Washington, make the industry tell us what’s going on and to treat the consumers better!
Oh, and PCI Security Standards Council, how’s that framework working out for you? I’m thinking the only ones benefiting from your content are the practitioners making money by supporting it.
Seriously, something needs to change.
Of course the truth is much more complicated. I don’t just focus on computers; my scope expands to include anything that involves sensitive information. While that always includes a variety of devices, it also includes paper-based and people processes as well. I frequently share stories about the enormous amount of printed content that’s to be found throughout an institution’s physical locations. I occasionally tell stories about how careless people can be when on the phone or in conversation, sharing all manner of sensitive information. It’s never just about computers; it is, however, always about information and how it needs to be protected.
Truthfully though, what I really do is search for controls that protect information, identify those that I find and try to measure their effectiveness, and, more importantly, identify where controls are missing and work with my clients to remedy that. At the heart of the regulatory requirements I focus on, it’s all about the risk introduced by the presence of information, from personally identifiable information (PII) to non-public personal information (NPPI). Risk: it’s what drives every single project I work on, it’s what drives every product and process I help develop. And really, if you take the time to read through the literature, it’s what’s behind just about every piece of regulation known to the banking world. Risk, risk, risk and risk.
One of the reasons I’ve enjoyed spending so much time working with the community banking and credit union sector over the past few years is that it’s a simple enough argument to make with fewer people to convince: everything you do to comply with the regulations should be risk-based. It doesn’t really make a difference if it’s complicated to do or time consuming; you prioritize based on where the risks are found and make decisions accordingly. But that gets much more difficult to do as the institutions grow in size and complexity. Over the fifteen years I’ve been building and supporting compliance initiatives I’ve worked with Fortune 50s, 100s and 500s and a whole lot of financial institutions that merely read Fortune magazine. But while their overall size varies widely, risk is still risk and that never changes.
I wish more practitioners embraced this simple concept. While some do, many still don’t. There’s often a rush to come up with a standard set of decision criteria to drive the work based on factors not necessarily aligned with risk factors. Those who have worked with or for me will tell you that when presented with questions about which vendors or applications to assess, or what to look for when conducting any type of assessment, my first line of logic is to figure out where the greatest possible exposures are to be found. Assessing a low-risk application yields little value no matter how complete the assessment may be. And reviewing a vendor where the dollar spend is high but the risk factors are low does little to protect the institution.
Beware the practitioner who wields a hammer for they only know to look for nails.
Your regulator doesn’t want you to blindly implement compliance programs; they want you to identify and manage risks, real risks. They want to be able to understand the logic and approach being used and find credible evidence that you’re focusing your efforts on the right things. Go back and read through the library of FFIEC documentation and pay close attention to the hooks inserted throughout where they talk about conducting assessments and using approaches appropriate for the size and complexity of your institution. Then scan through your related program inventory and figure out if you’ve designed things accordingly. Are they actually protecting your institution from credible threats and risks or are they just filling binders on your compliance officer’s shelves?
For me, professionally, I’d prefer to do only meaningful work, and in the audit and assurance world meaningful is code for risk-based.
I’ve posted before about such things: about how you need to exercise good judgment when online and when sharing potentially sensitive information (always avoid those Facebook “about me” quizzes). While something like the Facebook breach might make it a little easier for the bad guys, the truth is the sheer volume likely rendered the information useless. I couldn’t find a Social Security number, bank account number or anything else remotely resembling a true digital prize. And I looked, believe me, I looked. I should qualify what that means: I have a well-earned reputation for being able to develop fairly extensive dossiers on people by using a variety of techniques, all based upon readily accessible online resources. It’s sort of a hobby interest of mine and I find new and better ways all the time to improve my techniques. But other than using the Facebook-skimmed data for marketing activities, I wouldn’t think it to be too big of a deal.
However, if you’re looking for a really neat way to access social network sites in such a way that you get to work smarter, not harder, when up to no good, there are far more effective methods available. My newest favorite threat to all of our privacy and sensitive information is a recent add-on to Outlook that allows me to instantly access Facebook and LinkedIn information directly connected to an email account. The way it works is that you send me an email, the Outlook add-on then scans Facebook and LinkedIn for activity linked to that email account and displays it all nice and neat in a sub-window below the message. I installed the add-on on Wednesday out of curiosity, expecting little if anything useful. The first email I received after the fact was from an associate in the banking industry. This person must use a business email for Facebook and LinkedIn because the aforementioned sub-window filled quickly with nearly a dozen different bits of information between Facebook and LinkedIn. I can view family photos, a scheduled event detailing an upcoming vacation and several LinkedIn updates including new connections. That by itself is scary enough, but what makes it worse for me is that I’m not connected to this person on either site. I was able to see all of this information without even wanting to. In one neat little bundle, I have the person’s email address, access to personal information, a clear indication of when they plan to be away from the office, and a simple way to track the individual’s whereabouts. Oddly enough, if I searched either site directly I couldn’t see much of the same information, but the Microsoft utility apparently removes such obstacles and gets me to where I want to be.
What would you rather have: A monstrous database with relatively benign Facebook user information or an email containing all forms of PII combined with the person’s title and position at a bank or credit union? I know who they are and if they are likely to have broad access capabilities within their institution — information allowing me to reset passwords and close to no possible way to trace this all back to me.
As if this isn’t enough to cause all you security-minded folks to lose sleep, there’s one more new wrinkle to worry about. Facebook now has its new “Places” functionality working, in which mobile users can indicate where they are at a given point in time. It reminded me of the Trip-it utility that people started using on LinkedIn last year. Essentially, both tools allow you to provide specific information to everyone you’re connected to and many of the people they’re connected to, letting them know when you’re out of the office or away from home. Think about it: You go to the beach for the day and update your location on Facebook. You’re thinking that it’s no big deal if your friends and family know where you are, and you may be right. But on the day I tried it out, I tagged a family member who was with me. He has nearly 600 Facebook friends, of which he knows less than a third. So 400 relative strangers knew that not only was he away from home but so was his family. Any one of those connections instantly knew there was a reasonable chance that if they broke into our house they could get in and out with little chance of detection. For a society where people have their mail collected daily and their newspaper service suspended when away on vacations to avoid the appearance that the house is empty, this is a stunning turn of events. And you can’t stop the kids from using the newest and latest capabilities, so now we have potentially tens of millions of people advertising when they’re away from home and for how long.
It’s amazing, really, how we react to a threat framed for us by the media but almost completely miss out on another that’s way more likely to hurt us. The first thing I would do as a CISO would be to have a script written that checked every corporate email account against all popular social network sites to see if anyone is showing up. The second thing I would do (and already advise clients to do) is to update all of my related policies and training curriculum to address mixing business with pleasure: Never use your corporate email, never advertise travel plans, and never disclose anything even remotely resembling sensitive data on any of the social networking sites. And I would incorporate activities that check to see if these new policies are being followed. Remember, the right way to manage this new evolutionary twist in technology isn’t to prevent it but rather to manage it appropriately.
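That first script is conceptually simple. Here’s a minimal sketch of the shape it might take; the probe function is a deliberate stand-in, since real social networks expose (or deliberately don’t expose) very different lookup mechanisms, and any real implementation would need per-site logic, the appropriate permissions, and a terms-of-service review. All names and addresses below are invented:

```python
# Hypothetical sketch: sweep a list of corporate email addresses across
# a list of social sites and report where an address surfaces. The
# probe below is a placeholder returning canned results for the demo.

def found_on_site(site, email):
    """Stand-in probe; replace per site with a real, sanctioned lookup."""
    flagged = {("examplebook", "jdoe@ourbank.example")}  # canned demo data
    return (site, email) in flagged

def exposed_accounts(emails, sites):
    """Return (email, site) pairs where a corporate address surfaces."""
    return [(e, s) for e in emails for s in sites if found_on_site(s, e)]

hits = exposed_accounts(
    ["jdoe@ourbank.example", "asmith@ourbank.example"],
    ["examplebook", "workedin"],
)
# Each hit becomes a conversation with the employee, not a punishment.
```

The output of a sweep like this feeds directly into the second item on the list: updating policy and training for the specific people who are actually exposed.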
Oh and just in case anyone needs to be reminded of the fundamental rule of security, make sure out-of-office replies are restricted to internal communications only. I can’t believe how many of them I still receive, and with this new Outlook capability it’s just a recipe for disaster.
But lately I’ve been wondering if it’s even the criminal element that presents the greatest threat to my PII. I worry that the banks themselves may be slipping just a bit in keeping up with their regulatory obligations regarding my privacy based on news from the field.
Our practice routinely calls on financial institutions with our services. We’ve spent an enormous amount of time and energy paring things down to what we believe are the most relevant areas based on guidance from the oversight agencies and from practical experience. And so when we engage a current or prospective client in dialog we’re typically cutting right to the chase in order to make the most efficient use of their time. We’ll hear a wide range of responses when asked how they’re managing a variety of key control activities (e.g. it’s managed internally, we use a software solution, our audit department does that, etc.) and for the most part it rings true. However lately we’re being greeted with a noticeable uptick in one response in particular: “The examiners didn’t even look at that so we’re not worrying about it right now.”
Not to belabor the point but as I’ve already mentioned we’re not offering exotic services. Quite literally everything we have to offer to our clients should make the short list of must-haves for any CISO or compliance officer. How can the examiners not cover any of these things?
To be fair, it’s typically not a reflection on ability but rather available hours. I’ve blogged before that when things are missed it’s almost always because the fieldwork only allows for so many hours: you start with the riskiest areas first and work your way down from there. So if the examiner needs 80 hours to cover the landscape but only has 40 hours to get it done, they have to focus where they think they most need to. But still, how do you not make sure that there’s a current business continuity plan in place, or check that the infrastructure has been tested recently to ensure there aren’t significant vulnerabilities present? Internally we’ve been very kind to the entire examination process over the past year or so because safety and soundness has really needed to be at the forefront of the regulatory efforts. So we balance our concern about what’s being overlooked with an understanding that the examiners are likely doing the very best with what they have to work with. But still…
I was reminded recently that the FDIC budget for 2009 included a 30% increase in the number of examiners available. At the time it was announced, I figured it was a move intended to ensure that compliance was being properly enforced across all areas during a very turbulent period in our banking history. However, nearly two years later, I wonder: what’s happened? How can I reconcile an increase in the number of examiners with an apparent decrease in information security oversight?
If you think I’m exaggerating, consider that over the past decade the FDIC released three or more Financial Institution Letters (FILs) addressing information technology guidance every year, right up until mid-2009. Since then there have been no updates at all relating to IT or information security. After never going more than a few months without offering updated guidance over a 10-year period, they’ve had nothing new to publish in 14 months. How is that even possible?
On one hand, I’m hearing that examiners aren’t always looking at key compliance activities, and on the other hand, I’m seeing an apparent drop-off in IT guidance from the chief banking oversight body. For someone like me who worries about these things on both a personal and professional level, this is not good. When I watch that IE8 commercial I’m not laughing; I’m wondering how anyone would even know if that sort of thing was actually going on right now.
The next day I received another message from him with a different link, thus confirming my earlier suspicions that something was amiss. After letting him know about the wayward messages, I started thinking about what had just happened. This is someone who lives security every minute of every day. He knows about every threat old and new, the tools and techniques to combat them and is one of those people I go to for advice when I don’t know where else to turn. And his Facebook session was sending out phantom messages without his prior knowledge. A little scary when you get right down to it.
But wait, it gets just a bit scarier for me.
Fresh on the heels of the Facebook incident, I came across an interview on a security website I visit now and again in which the interviewee offered his opinion that security threats from social media sites are greatly exaggerated. Really? Based on what? Here I am having just been presented with evidence that the threats are real, swift and plentiful, and I’m being told just days later that it’s really not that bad. The reason I’m writing about it here is that although the person being interviewed is not presented as a security expert, the website itself conveys a certain degree of legitimacy. The opinion was followed up by a recommendation that if you’re concerned about the threats inherent in using these sites, you should simply not use them. Hmmm. My takeaway from the interview boils down to “security threats from social networking sites are not so bad” and “if you’re concerned about threats, don’t use them.” So your choices are either ignorance or avoidance; nice.
I remember way back when Palm Pilots first became popular. Corporate IT reacted by banning them, claiming it would be a support nightmare. Not long afterward, the use of personal email became pervasive and people wanted to be able to access it from their workplace. Corporate IT reacted by blocking access to most common external email sites. A short while later, USB storage devices started showing up and almost a minute later corporate IT reacted by, you guessed it, banning them. Fast forward to 2010 and smart phones (the modern-day equivalent of the Palm Pilot) are commonplace within corporate infrastructures, USB devices are allowed, and the demand for access to external email has subsided quite a bit (thanks to the aforementioned smart phones).
Now the greatest threat presented by the most recent wrinkle in the ongoing evolution of technology is access to social media sites. I keep reading articles and coming across polls exploring whether or not companies should allow access to Facebook and LinkedIn. I’m wondering why anyone seems to think it’s optional. Exactly which technological advance has corporate America successfully derailed since technology first landed on our desks 40 years ago?
Here’s my take on all of this:
I clicked on the link and, instead of being directed to the desired page, was routed through to a WebSphere administration panel.
But that’s not even the best part of the story.
After confirming that in fact I was somehow through their firewall security and at some point along the way into their infrastructure, I decided to be a good citizen and let them know. I tried calling their customer support department twice and both times, after being routed through some crazy series of automated menus, wound up being treated as someone who was simply having trouble accessing his online account. One customer support representative had no clue what I was describing to them and the other one seemed to grasp what I was saying conceptually but didn’t have a page in his playbook to manage the call and so he defaulted to trying to help me pay my bill.
The funny thing is that once I navigated from their homepage through to the payment page it worked just fine, but if I selected the bookmark it deposited me right back at Websphere Central. And as of 30 seconds ago it still does.
Now I know that bashing the local cable company is a popular thing to do and has fast become one of our nation’s favorite pastimes. But I’m not so much picking on them as I’m amazed that they have such an obvious flaw in their network security. My firm conducts basic penetration tests all the time and this is the sort of thing that would be flagged without much effort. Why haven’t they found it yet? And if I’ve found it entirely by chance, what about the hackers who go hunting for this sort of thing? Or have they already discovered it and are quietly feasting on it while it remains available?
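This is roughly the kind of check even a basic penetration test performs: fetch a page and flag responses that look like an exposed administration console. The marker strings and the example URL below are illustrative assumptions, not the cable company’s actual configuration.

```python
import urllib.request

# Telltale strings that suggest an admin console leaked through
# to a public-facing URL -- illustrative markers only
ADMIN_MARKERS = ("administration console", "websphere", "admin login")

def contains_admin_markers(body):
    """Return True if the page body resembles an admin console."""
    body = body.lower()
    return any(marker in body for marker in ADMIN_MARKERS)

def looks_like_admin_panel(url):
    """Fetch a URL and flag it if the response resembles an admin console."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read(65536).decode("utf-8", errors="replace")
    except OSError:
        return False
    return contains_admin_markers(body)

# e.g. looks_like_admin_panel("https://example.com/account/payment")
```

A scanner would run a check like this across every public URL and redirect target; anything that trips the markers gets written up as a finding, which is why a flaw this visible surviving in production is so surprising.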
It’s amazing any of us are ever willing to conduct business online, when you get right down to it.