If you read the vendor press releases and marketing slicks, you’d think that 802.11n was the bomb. It’s faster, it’s more powerful – it even has more antennas for goodness sake! Shouldn’t that mean something to the average techie? Maybe so, but I’m just not seeing it.
The 802.11n draft has been out, for what, three years now and we’re approaching the one-year anniversary of the “final” amendment. But where is 802.11n? I’ve yet to see any of my clients deploy it. I’ve yet to see it at any Wi-Fi hotspots – including large hotspot deployments such as airports. I’ve yet to see it when driving around town. It’s just not out there. Maybe it’s just me not looking hard enough.
Better yet, maybe 802.11n is the Windows 7 of networking: Not a lot of market penetration just yet, but if we wait and see – it’s coming? Given how the market works, perhaps once existing a/b/g equipment is replaced in the future, 802.11n will be the only viable alternative. Who knows?
I suspect some larger enterprises, universities and businesses with a heavy reliance on Wi-Fi are rolling out 802.11n and loving it. I’m just not seeing it. What about you?
Kevin Beaver is an independent information security consultant, keynote speaker, and expert witness with Principle Logic, LLC and a contributor to the IT Watch Blog.
There are many phases to creating a wireless network, from planning to deploying. But concerns for your network don’t end there; beyond initial set up and deployment is management and security. One of the big monsters in network security is the end user, so security and network management begin with securing and managing who has access to your network.
Determining the Placement of Your Network Access Control
When choosing a method for Network Access Control (NAC), consider the following:
1. Level of security:
- User identity management versus just the computer’s identity.
2. Network infrastructure versus endpoint-based approach (server software/appliance vs. network switch):
- Network-based systems boast better centralized control, easily set enterprise standards, and NAC protection for remote users accessing the VPN.
3. Depth of network monitoring:
- For endpoint security: Check PC at login only or continuously monitor the whole time it’s on the network?
- Consider the lesser of two costs: NAC monitoring costs versus fix costs for malware or break-ins.
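To make these trade-offs concrete, here's a minimal sketch of an admission decision that weighs user identity against machine identity and separates the login-time check from continuous monitoring. All names here are invented for illustration; no real NAC product exposes exactly this API.

```python
# Toy model of the NAC decision axes above (illustrative only).
from dataclasses import dataclass

@dataclass
class Endpoint:
    user_authenticated: bool   # user identity, not just the computer's identity
    machine_known: bool        # machine identity (e.g., registered MAC/cert)
    posture_ok: bool           # patched, AV running, etc.

def admit(endpoint: Endpoint, require_user_identity: bool = True) -> bool:
    """Login-time admission decision."""
    if require_user_identity and not endpoint.user_authenticated:
        return False
    return endpoint.machine_known and endpoint.posture_ok

def monitor(endpoint: Endpoint) -> bool:
    """Continuous check: re-evaluate posture while it's on the network."""
    return endpoint.posture_ok

# A known machine with a logged-in user and clean posture gets on;
# drop the user-identity requirement and only machine posture matters.
print(admit(Endpoint(True, True, True)))    # True
print(admit(Endpoint(False, True, True)))   # False
```

The `require_user_identity` flag mirrors consideration 1, and the split between `admit` and `monitor` mirrors consideration 3: checking at login only versus the whole time the PC is on the network.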
The most important part about crafting your NAC policy is Continued »
Here’s an interesting wireless startup: KeyWifi. The company’s slogan and apparent mission is “Unlocking hotspots near you.” It’s actually a neat idea: it puts accessible but underutilized hotspots to good use and helps the world by creating “positive fiscal, social and environmental results”. The premise of the business model is that you join the system to supply hotspot access and/or to rent hotspot access … all for a fee. If you’re a supplier and can get a few users on board, suddenly the system is paying for your Internet access. For users, it offers a way to get online without having to pay full price for broadband. Pretty cool.
I won’t be a supplier or a user because I know the bad things that certain – often “trusted” – people can do when using your Internet connection or examining your wireless network traffic. But for those who live more “openly” or are absolutely certain that their computers and communication sessions are locked down, I could see KeyWifi working – especially where there’s not a Starbucks or McDonald’s around offering free Wi-Fi.
Certainly worth keeping an eye on.
While you’re scrolling through Twitter on your Wi-Fi network, have you considered who the unsung heroes fighting the good wireless fight every day are? No need to panic; we’ve got you covered. From wireless networking enthusiasts to journalists to high-performance Wi-Fi vendors, we’ve got the Holy Grail of Twitter feeds (well, for wireless anyway).
Your Average Wireless Joe
@wifi_guy: He patrols Twitter for Wi-Fi news and is an active participant in #WirelessWednesday.
@sniffwifi: Ben Miller is an independent contractor performing all kinds of wireless-related work in the Los Angeles area.
@bionicrocky: A self-proclaimed geek, wireless guy and cryptonerd, Rocky Gregory tweets about all things wireless and then some.
@KeithRParsons: Keith Parsons founded Wireless LAN Professionals, a community for WLAN pros. He’s worked exclusively with WLAN for the past eight years and has access to myriad resources and contacts.
@joelbarrett: Joel Barrett works as a wireless network architect for Cisco. He also keeps a personal blog ranging in subjects including wireless networking.
@CWNP: The official Twitter of CWNP, Inc., the “IT industry standard for vendor neutral enterprise Wi-Fi certification and training.” Get the latest in Wi-Fi news and daily definitions.
@jameyk1stner: Jamey Kistner is a Certified Wireless Technology Specialist (CWTS) who tweets about wireless news and takes part in #WirelessWednesday.
@MarcusBurton: Marcus Burton writes about Wi-Fi and is a CWNP tech guy.
Check out vendor(ish) Tweeters after the jump.
Think you’ve made a safe storage decision by going with a trusted name? Think again. Today’s guest post is from Roger Kelley, aka @storage_wonk, who is the principal architect at Xiotech, blogger at StorageWonk.com and one of our featured storage Tweeters.
Of all the challenges faced by Information Technology (IT), purchasing SAN storage for the data center is one of the biggest. Cost, criticality, and complexity are the three C’s that all too often impact the fourth C: career. IT is quite used to making tactical purchases in the form of servers, routers, desktops, and the like. SAN storage acquisition is harder because it is inherently strategic in nature and therefore poses greater long-term risks and rewards for the company at large.

It’s interesting that most companies tend to follow a similar path when the order comes down to evaluate storage for purchase. They form a committee that draws up a list of weighted requirements covering both hardware and software features and functions. They then invite several storage vendors to the party to offer their pitch and, from that pool of information, the committee scores each vendor against the requirements list in an effort to evaluate everyone in an unbiased and “left brain” sort of way.
The perplexing part comes in when, after all this effort is expended and the players are evaluated, IT decides to go with a large vendor purely because “Nobody was ever fired for buying _____!” Though they may think they’re playing it safe or smart, what it really indicates is how little confidence they have in their own ability to select a storage vendor based upon the technological merits of the vendor and the unique business drivers of their own organization. People who make storage decisions based upon fear are, in effect, saying, “When it fails I can always justify my acquisition to senior management by hiding behind the perceived reputation of the market leading vendor because that’s what everyone else buys.” This lemming-like behavior can prevent a company from benefiting from newer technologies that might significantly help their bottom line. Given the quality and innovation of smaller companies, this type of thinking is an unnecessary hindrance.
Truth is, there are excellent reasons to look beyond the so-called market leaders in storage and see what the smaller companies are up to. These smaller companies can’t outspend the big ones on marketing so they have to focus on out-innovating them in technology. This innovation has recently led to some amazing technological breakthroughs that the big boys simply can’t offer, such as self-healing arrays (not “fail in place” but truly “self-repairing”), unheard of real-world performance gains without expensive SSD (no, really!), hugely scalable architectures that actually scale, and unmatched levels of reliability.
The problem with the big guys of storage is that they get lax and stifle innovation in favor of playing it safe and keeping the revenue stream rolling along. The smaller players simply cannot afford to be lax; they have to innovate to stay alive, and this bodes well for customers who are looking for “best of breed” options for their company. But to take advantage of these innovations, company management has to empower their IT staff to make the right technology decisions for the company instead of putting them in a circumstance where they feel they have to make the safe decision for their own careers.
But some may ask, “Isn’t avoiding difficulties precisely why you should buy from the big boys?” Not really! First of all, hardware, from all vendors, can— and does—experience difficulties. No hardware manufacturer can provide 100% perfect operation, 100% of the time, for 100% of its customers. Glitches arise, hardware fails, configurations get mangled due to human error (both vendor and end-user). In short, life happens for everyone. Second, smaller vendors tend to be much more responsive and nimble when difficulties inevitably arise simply because your business means more to their bottom line than it does to the industry giants. If you need a resolution to a problem with a giant company, it can often take 6 months to get a “no”. “Yes” can take considerably longer.
In the end, whether or not your chosen storage vendor is successful in your environment is more about architectural design than brand name. True, a properly architected storage network tends to be more expensive—but then so is downtime! Buying storage based upon technological and business drivers, and spending a bit more to do the job right, are significantly better strategies for company and career than doing things half-way and hiding behind a name when things fail.
Here at ITKnowledgeExchange.com, one of the evergreen questions we see in the IT answers forum is the recent IT grad asking for advice on what to do next. With so many possible certifications and paths available, figuring out your course of action is a daunting task. Never fear: At ITKnowledgeExchange we have members with decades of experience to provide their two cents to those in need of IT career guidance. One of our newer members, ITstudentGrad, asked for suggestions of good entry level IT jobs. He’s received some great advice from members who were once in his shoes.
Tip 1: Exhaust your resources.
As our very own EmNichs points out, bloggers like Ed Tittel offer advice such as 7 Questions for Highly Effective Career or Certification Advice. There are myriad resources for jobseekers in all industries, and IT is no exception. Whether you’re searching for general career advice or a more complete picture of specific areas in IT such as business service management, cloud computing, or IT consulting, more and more IT professionals are jumping on the blogwagon to share their insights, predictions and experiences across all areas of IT.
Tip 2: Start at the beginning.
Shanekearney, a network engineer in Ireland, holds a diploma in Computer Science along with the CCNA (Cisco Certified Network Associate), MCSA (Microsoft Certified Systems Administrator), MCP (Microsoft Certified Professional), A+ and ECDL (European Computer Driving Licence) certifications, and is currently studying for the MCSE (Microsoft Certified Systems Engineer).
His advice for ITstudentGrad is to get a taste of every flavor of IT: “You should start in service desk tech support as it deals with so many different areas and allows you to gain much needed real world work experience.” Despite Shane’s complaints about his position, he assures us that it is a good jumping-off point to a more technically demanding position. In a hands-on industry such as IT, nothing teaches better than real-world experience.
Tip 3: Change your mind.
Stevesz began his career in IT in the late 1970s, moving from part-time to full-time work over more than two decades. His company, LAN Doctor, Inc., gives him experience ranging from “break/fix” tasks to computing strategy, network installation and maintenance. His advice is to go after what interests you, but also what will sharpen your skills and knowledge. Get experience troubleshooting hardware and software problems in a help desk position. Don’t approach anything without a dose of flexibility; change your path as your interests change.
Stevesz shares his own story of changing interests: “I started out as a dBASE III programmer (if anyone still remembers dBASE at all) then actually went to a help desk type of situation for a company that had its own proprietary database program. From there I went to a company that basically did break/fix work. We also branched out into servers and networking, and other things surrounding computers, concentrating on small, very small, businesses, and I am still at that company, as a partner in the business.”
Always keep feelers out there and if something strikes your interest, send a query! IT is an industry constantly in motion, requiring you to be just as fluid in your pursuits.
Tip 4: Give a little, get a little.
Labnuke99 is an IT manager at a company that designs, manufactures and sells electronic components and assemblies, primarily to original equipment manufacturers (OEMs). He supports the company’s international and domestic data networks and security. Labnuke99 advises seeking out a symbiotic relationship with a non-profit organization; what the work lacks in remuneration will be made up for in other ways. You’ll learn how organizations require and use IT services on a tight budget.
Aside from gaining invaluable real-world experience and making contacts as an IT professional, you can get satisfaction from giving charity and advocacy organizations the help they need. Treat any volunteer prospect like a potential job: Send along a polite inquiry and resume along with your motivations for helping out. Be sure to detail your availability; promising hours or skills you can’t deliver will only hurt you in the end. Do your research beforehand and outline specific areas where you’d like to help them improve – fix their confusing website, for example – or ask what IT headaches you can help with. Build your resume, connections, skills and karma points all in one fell swoop.
A couple weeks ago, I read an article over at Tech News World that got me thinking about endpoint security, and how it has become like a spy movie, where the biggest threats are often coming from the inside. We asked you if your endpoint security focus was shifting, and how you’re managing that. And you answered…
Technochic’s company has disabled USB ports, CD writers and implemented strict mobile policies when connecting to email servers. But it’s worth it, she says, because with a little awareness and a plan of action, endpoint security can protect against both internal and external threats.
Jinteik’s company, in addition to disabling USB and CD/DVD drives, has locked down the BIOS. They don’t allow OWA (Outlook Web Access) and enforce strict printing permissions. Vendors can’t connect to their networks, nor are there any wireless devices in the office.
TomLiotta’s company has taken into account the need for USB ports to connect devices such as a mouse or a keyboard. Aside from antivirus, they audit regularly via automated monitoring, authenticate and authorize according to position, and maintain a security policy. They’re in a unique spot as a software vendor of network security, meaning their employees are more knowledgeable than most on how to cause trouble. Their solution? Focus more on quality employee relationships and education rather than software and hardware obstacles. He makes a good point:
Fundamental safeguards will always be in place. This protects from mistakes made by the best of us. But clear authentication combined with authorizations that are capability- and object-based, for employees who have a solid relationship with their employer and who always have access to a good security policy, into systems with strong monitoring, all tend to make most issues disappear.
Chippy088, or David, agrees with Tom, and highlights the way many obstacles can be circumvented. Active Directory controls are in place primarily for normal users, and disabled physical ports can sometimes be accessed in safe mode. His suggestion? Virtual machines:
[They are] 90% more effective in controlling users trying to bypass security controls, as the local physical devices are not used in saving/printing.
What does bother him, though, is mobile devices, and finding the balance between control and flexibility for utilizing off-site access points.
Mitrum got right to the point: “I disabled USB ports and CD/DVD-ROM drives, Bluetooth, microSD, MMC, etc. in my organization.”
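The hardening measures readers describe can be thought of as a checklist to audit endpoints against. Here's a toy sketch of that idea – the control names and the example endpoint are invented for illustration, not drawn from any real product:

```python
# Hypothetical endpoint-security compliance audit (illustrative only).
# Controls are modeled as simple flags an endpoint either has or lacks.
REQUIRED_CONTROLS = {
    "usb_storage_disabled",
    "optical_writers_disabled",
    "bios_password_set",
    "owa_blocked",
}

def audit(endpoint_controls: set) -> list:
    """Return the required controls this endpoint is missing, sorted."""
    return sorted(REQUIRED_CONTROLS - endpoint_controls)

# An example laptop that has everything except locked-down optical drives:
laptop = {"usb_storage_disabled", "bios_password_set", "owa_blocked"}
print(audit(laptop))  # ['optical_writers_disabled']
```

In practice these flags would be populated by your management tooling (Group Policy reports, inventory agents, etc.) rather than hand-built sets, but the gap-finding logic is the same.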
Do some of these strike you as too strict or not strict enough? Share your thoughts and your own endpoint security policies in the comments section!
It’s the interior design of the IT enterprise, but someone has to do it. Rethinking and – ultimately – redesigning your network for wireless should happen in several phases, the first of which is planning. Like my father says, “Planning prevents piss poor performance.” The 5 Ps, if you will. This month we’re focusing on wireless networking and the steps necessary to untangle you from the wires of yesteryear.
Why WiFi (WhyFi?)
In general, a wireless network allows for greater flexibility, from in-office mobility to network configuration. Wires can limit signal strength and hinder the reorganization and growth of your network, so the productivity gains and the ease of reconfiguration are important considerations. Lisa Phifer of Search Networking reports that “even ‘average’ companies [that invested in 802.11n] reported 114% growth in WLAN traffic, a 60% increase in wireless coverage throughout their offices, and a 44% reduction in downtime.” Can you afford anything less?
A walk through the WiFi technologies
Today’s industry standard, 802.11n, improves on the bandwidth of previous standards by using multiple wireless signals and antennas, or MIMO (multiple input, multiple output) technology. It can handle real-world data rates of 100 Mbps and up, with better range and backward compatibility with 802.11g. The build-up to 802.11n was justified: it offers the fastest maximum speed, the best range and the most resistance to interference of the family.
The follow-up to the pioneering 802.11, 802.11b offered a respectable 11 Mbps, similar to traditional Ethernet. Because it uses the unlicensed 2.4 GHz band, there was room for interference from microwaves and cordless phones sharing that frequency. Ratified around the same time, 802.11a increased bandwidth to 54 Mbps. Its less crowded 5 GHz band avoided that interference but came with a shorter range, more easily obstructed by walls.
Enter 802.11g, born of the attempt to combine the pros of 802.11a/b, with a bandwidth of up to 54 Mbps. Backward compatible with 802.11b, it operates on the 2.4 GHz frequency, making interference a consideration. The higher cost was worth it for the combination of a fast maximum speed and a wide range.
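To keep the standards straight, here's the walk-through above condensed into a small lookup table. The figures are nominal: real-world throughput is lower, and the 802.11n number is a conservative real-world estimate rather than the theoretical maximum.

```python
# Quick-reference table for the 802.11 family discussed above.
# Nominal figures; 802.11n can also operate in the 5 GHz band.
STANDARDS = {
    "802.11b": {"band_ghz": 2.4, "max_mbps": 11,  "range": "good"},
    "802.11a": {"band_ghz": 5.0, "max_mbps": 54,  "range": "short"},
    "802.11g": {"band_ghz": 2.4, "max_mbps": 54,  "range": "good"},
    "802.11n": {"band_ghz": 2.4, "max_mbps": 100, "range": "best"},
}

def fastest(standards: dict) -> str:
    """Name of the standard with the highest nominal data rate."""
    return max(standards, key=lambda s: standards[s]["max_mbps"])

print(fastest(STANDARDS))  # 802.11n
```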
Avoiding Piss Poor Performance
Now it’s time to remember the 5 Ps and begin planning. It’s important to understand that an overhaul such as the wired-to-wireless transition can be too much for an all-at-once solution. Knowing what level of upgrade your company is capable of is the first step. Search Networking’s Lisa Phifer suggests that “upgrades must be budgeted and scheduled over time, resulting in an incremental network infrastructure migration.”
Consider the hardware you’ll have to replace, such as Fast Ethernet switches (with Gigabit Ethernet switches). Because 802.11n access points draw more power than their predecessors, you may need switch ports that supply more power, such as PoE+ (802.3at). Get in on the 2.4 GHz versus 5 GHz debate: do you need the wider range provided by 2.4 GHz, or will you deploy dual-band wireless LAN APs? If you decide to deploy both frequencies in your network, consider building the capability to guide clients to connect at 5 GHz when possible. Search Networking’s Shamus McGillicuddy suggests Cisco’s BandSelect or Aruba’s “band steering feature in its Adaptive Radio Management software.”
Sit down and do this:
- Try to predict the traffic load that wireless will deliver to your wired network and how it will grow over time.
- Plan for refresh cycles as clients and your workforce migrate to 802.11n devices.
- Design your WLAN layout and configuration to solve bottlenecks before they occur.
- 802.11n’s MIMO radios can require more power than older Power over Ethernet budgets supply; consider this when planning upgrades.
- Be sure that legacy tools such as WLAN analyzers are upgraded and prepared to work with 802.11n.
- A larger network with increased capabilities such as the 802.11n also requires better security with increased monitoring. In order for your wireless network to meet its full potential, it must be fully secured and monitored from the get-go with 24/7 alerts of unwanted connections to the network.
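On the power point in the checklist above, a back-of-the-envelope check helps. The per-port figures below are nominal – 802.3af supplies roughly 15.4 W per port and 802.3at (PoE+) roughly 30 W – and the AP wattage is a made-up example, so check your switch’s and AP’s datasheets.

```python
# Rough PoE budget check for an 802.11n rollout (nominal figures).
AF_WATTS = 15.4   # 802.3af, per port
AT_WATTS = 30.0   # 802.3at (PoE+), per port

def fits_per_port(ap_watts: float, port_watts: float) -> bool:
    """Can one AP run from a single PoE port of this class?"""
    return ap_watts <= port_watts

def total_draw(ap_count: int, ap_watts: float) -> float:
    """Aggregate power the switch's overall PoE budget must cover."""
    return ap_count * ap_watts

# A hypothetical three-stream 802.11n AP drawing 18 W:
print(fits_per_port(18.0, AF_WATTS))   # False -> plan for PoE+
print(fits_per_port(18.0, AT_WATTS))   # True
print(total_draw(24, 18.0))            # 432.0 W across 24 APs
```

Note the second check: even when each port can power an AP, the switch’s total PoE budget is often less than all ports at full draw, so the aggregate number matters too.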
In IT there is definitely a time and a place for the trial and error method, but when you begin migrating your wired network to wireless, avoid trial and error for maximum effectiveness and minimum headaches.
Wondering when you should update your wireless network from scattered unsecured “hotspots” to something a little more … serious? Then you’ve been wondering a little too long, according to today’s guest author Craig Mathias. Mathias is a Principal with the wireless and mobile advisory firm Farpoint Group. He can be reached at firstname.lastname@example.org. -Michael Morisy
With the IEEE 802.11n standard taking so long to develop (about seven years!), it’s no wonder that many are still having a hard time with the fact that, yes, the standard is done, and all major WLAN system vendors are now shipping .11n products. In fact, many are reporting that 802.11n is now the bulk of their sales. Farpoint Group has, since the release of the interim 802.11n spec from the Wi-Fi Alliance, in fact been recommending nothing but 802.11n – it has so much higher performance in every dimension than any previous standard, from throughput to capacity to range, that there’s no point in riding the older horses anymore. OK, .11n is a little more expensive, but it has vastly improved price/performance – perhaps the vendors don’t want you to hear this, but it’s really downright cheap to install a .11n infrastructure, given falling prices due to competition and much lower component costs.
So it might surprise you to learn that the key issues (and, yes, there are some) surrounding 802.11n have nothing to do with the maturity of the technology or the price to get into the game, or even the radios required. The arguments have moved – here’s what’s at the top of the list for many IT managers today:
- Operating Expense (OpEx) – Whereas the capital expense (CapEx) required to buy that new .11n system is nothing to sneeze at, most of the life-cycle cost in any large-scale WLAN deployment is in OpEx – management, maintenance, help desk, etc. And, of course, the reason for this is that so much of the work here is labor-intensive, and that labor is highly skilled. This is why we always recommend that end-user organizations carefully examine the management capabilities of systems under consideration. There are often major differences in functionality here between products, and it’s important to verify functionality against one’s list of requirements, including interface points with other operational systems, in order to minimize OpEx and thus total cost of ownership (TCO) down the road. As it is with end users, productivity is the name of the game in operations as well.
- Assurance – Finally (or not; we’ll get to another cautionary note shortly), it’s important to consider a wireless LAN assurance solution in planning your new .11n deployment. These systems perform a broad range of functions – regulatory compliance monitoring, intrusion detection and prevention, security audits, and, in some cases, such vital services as spectral analysis to detect interference. There is a slow trend afoot to integrate assurance services into core WLAN management systems, and that makes sense. But you may want a system entirely distinct from your WLAN infrastructure, in a belt-and-suspenders fashion.
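The OpEx point above is easy to see with a back-of-the-envelope lifecycle comparison. All dollar figures here are invented for illustration; the takeaway is only that operating cost dominates over a multi-year life, so better management tooling can beat a lower sticker price.

```python
# Toy TCO comparison between two hypothetical WLAN systems.
def tco(capex: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership: purchase price plus operating cost."""
    return capex + annual_opex * years

# System A: cheaper to buy, weaker management tools -> higher OpEx.
a = tco(capex=80_000, annual_opex=60_000, years=5)
# System B: pricier up front, better management -> lower OpEx.
b = tco(capex=110_000, annual_opex=40_000, years=5)
print(a, b)  # a=380000, b=310000 -> B is cheaper over five years
```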
Of course, there’s more. I’m particularly interested in convergence functionality, for example, to allow handoffs between cellular systems and wireless LANs. And while I noted above that differences in radio performance alone aren’t the gating issue that they once were, it’s still vital to do a little testing to make sure that you’re getting the performance you need. And attention must be paid to the rest of the network value chain as well, particularly in terms of deploying Gigabit Ethernet – .11n will swamp Fast Ethernet without even trying very hard.
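That Fast Ethernet warning is easy to verify with simple arithmetic. Assuming (hypothetically) around 120 Mbps of real-world throughput per radio on a dual-radio .11n AP:

```python
# Will an AP's aggregate wireless throughput saturate its wired uplink?
FAST_ETHERNET_MBPS = 100
GIGABIT_MBPS = 1000

def uplink_saturated(radios: int, mbps_per_radio: float,
                     uplink_mbps: float) -> bool:
    """True if combined radio throughput exceeds the uplink capacity."""
    return radios * mbps_per_radio > uplink_mbps

# Two radios at an assumed ~120 Mbps each = 240 Mbps aggregate:
print(uplink_saturated(2, 120, FAST_ETHERNET_MBPS))  # True
print(uplink_saturated(2, 120, GIGABIT_MBPS))        # False
```

The 120 Mbps per-radio figure is an assumption for illustration, but even at half that rate a dual-radio AP outruns a 100 Mbps uplink, which is why the gigabit upgrade belongs in the plan.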
A full checklist could take several pages, but I hope my core message is clear: 802.11n is all you should be considering at present. The benefits are there, and the issues, as I noted above, have shifted. No matter – it’s full speed ahead, pun intended, to 802.11n.
Windows 7 migrations…Yeah, yeah, yeah – what a boring topic. Be that as it may, boring doesn’t mean unimportant. If you’re going to move forward with Windows 7, you might as well take the time to do it right. If that’s not enough incentive, how about this: Time management experts have found that one minute in planning can save five minutes in execution. Now that’s some ROI you can bank on!
But where do you start with Windows 7 migrations? As I wrote in a previous post, Microsoft has their own toolkits. However, before you even go down that path, TechTarget has several good resources on the subject worth checking out. Here’s the rundown:
Finally, here’s a piece I wrote last summer on why it’s important to lock down XP before moving to Windows 7:
Sure, these topics aren’t all that glamorous but they sure can help you set yourself and your business up for success with your Windows 7 migration efforts.