Sure, technically Interop Las Vegas 2010 started yesterday, but it really kicks off in full today, with the keynotes beginning tomorrow. Whether you’re trekking into the Las Vegas sands or following from afar, the IT Watch Blog brings you the Interop experience, minus the hangover, starting with our exclusive interview with Interop Las Vegas 2010 General Manager Lenny Heymann.
And Heymann had some surprising things to say: That the networking market has been dull, that virtual events aren’t eating away at live conferences and that early indicators are showing desktop virtualization might get very hot, very soon.
But perhaps the most surprising thing Heymann told me? He’s been helming Interop for so long, he doesn’t know how many he’s run anymore.
“I joined the company that produces Interop [TechWeb] in 1997,” he said. “And somewhere in the early 2000s I took over as GM.” When pressed to be more specific than that, however, Heymann’s memory failed him.
It’s a forgivable lapse since Interop, which has grown to become one of the most influential IT conferences, has found itself changing to keep its top dog status.
“Going back to the early days, Interop was created to help foster, literally, interoperability in the networking area,” he said. “As we grew and networking grew in importance, we took on a lot more of the IT and business technology story. Currently we cover the whole gamut of business technology that IT professionals need.”
So what are the hot topics at Interop 2010?
Cloud Computing: “There’s so much attention and valid interest [in cloud computing], and it’s not just an outsourcing problem,” said Heymann. “There’s networking and security issues and it goes on and on. … Cloud is now number one with a bullet in terms of people wanting to know more information.”
Virtualization: Just behind cloud computing, and closely related to it, is an avid interest in virtualization: Server virtualization, network virtualization, and even desktop virtualization, which is just now beginning to appear on many IT departments’ radar screens.
Sunday and Monday are both seeing dedicated virtualization tracks and sessions. Surprisingly, Heymann said that this year saw a spike in desktop virtualization, with 33% of respondents in a pre-Interop poll indicating they were investigating deploying the technology.
Networking: Although Interop was created as a deep technology networking conference, Heymann admitted that the focus had often shifted to other, “hotter” technologies over the years.
“The last few years in networking have been a little bit slow: The market has been slow overall, but one thing we’re looking forward to this year, from both the conference and exhibitors, is more excitement in terms of how vendors are going to deal with virtualization and cloud computing,” he said. “The other thing that’s going to add energy to the networking world is a whole lot of mergers that have taken place over the past year.”
3Com and HP, Avaya and Nortel, Cisco and countless assimilated pieces. Heymann said that with all the shake-ups, Interop is the one place end users can come and hear the whole story straight from the horse’s mouth.
Whether you’re trekking out into the Las Vegas sands or watching safely from your home base, IT Knowledge Exchange has you covered: The Enterprise IT Watch Blog is teaming up with SearchNetworking.com to bring you the most comprehensive Interop coverage. Follow Interop coverage on the IT Watch Blog or check out SearchNetworking’s coverage on the Network Hub. If you’re in Las Vegas yourself, shoot me an e-mail at Michael@ITKnowledgeExchange.com: I’d love to hear what you’ve learned, seen or heard, and we have plenty of free swag to give away to IT Knowledge Exchange members!
This sponsored guest post is from Cisco Systems and was written by Mark Leary, a Cisco senior strategist and Chief Marketing Officer – Network Systems.
We work, live, play, and learn in a world that has no boundaries and knows no borders. We expect to connect to anyone, anywhere, using any device, to any resource — securely, reliably, seamlessly. That is the promise of Borderless Networks.
In order to fully deliver on this promise, Cisco is advancing along three critical fronts – Workplace Transformation, Technology Leadership, and Operational Excellence. In this blog, let’s focus on Workplace Transformation and how Cisco is working to accelerate technology advancements and customer success along this front.
The new workplace is visual, mobile, and in-the-moment. And for the end user, the quality of the experience is everything. Network service levels are judged purely by the quality of this “customer” experience. And remember, this “customer” could be running any application, in any location, from any device. And in this borderless world, this “customer” may be internal workers, business partners, or external end customers.
Here, the network must respond to onrushing video traffic demands, meet the rising expectations of an increasingly mobile user population, and last, but not at all least, ensure the integrity of critical business exchanges – no matter the business application, no matter the user location, no matter network conditions… no matter what!
The New Workplace is Visual.
Video drives high impact in business and applies high pressure on the network.
And video is growing dramatically across all networks — big and small, local and global, public and private. 65% of the traffic on Cisco’s own network is video – and that figure is climbing!
Medianet capabilities not only ensure that your network delivers a high-quality video experience to the end user when needed, but also help ensure that your network is ready for onrushing video demands and applications. Built-in intelligence offers adaptability and predictability, reliably and transparently providing high-quality media experiences to any device on the network. With medianet, your Borderless Network optimizes traffic flow and bandwidth utilization, while reducing the impact of network congestion. And it does all this while lowering the complexity and risk associated with video rollouts.
You can bet that Cisco IT makes effective use of medianet capabilities within our own internal network. Without medianet readiness, our own network and IT staff wouldn’t be able to deliver the kind of experience our workers, partners, and customers have come to expect from Cisco’s network and systems.
Cisco’s medianet solution is further bolstered by key networking services such as Multicast, QoS, and mobile VideoStream. These key services combined with such critical built-in switch and router device capabilities as port buffering and video streaming are put to effective use by a broad range of video applications, ranging from the interactive Cisco TelePresence and WebEx to the one-way IP video broadcast or distance learning session.
Providing both the infrastructure and demanding applications such as Cisco TelePresence has taught us a lot about how to do video right across the network. Cisco Validated Designs and public cloud-based services from Cisco and service provider partners are offered in support of the specific needs of our customers. Cisco IT’s experience, proven network engineering practices and designs, specialized support services, and an extensive ecosystem further heighten the positive impact of business video within Borderless Networks.
The New Workplace is Mobile.
The explosion of mobility devices, users, and applications raises the stakes in wireless scalability, security, and support requirements.
Radio frequency (RF) interference is a growing concern for organizations deploying indoor and outdoor wireless networks. Left unaddressed, RF interference can result in low data rates and throughput, lack of sufficient WLAN coverage, WLAN performance degradation, poor voice quality, and low end-user satisfaction. This, in turn, can lead to decreased network capacity, network downtime, and potential security vulnerabilities from malicious interference. Cisco offers industry-leading RF management capabilities in its mobility solutions. Cisco technologies such as ClientLink, Spectrum Intelligence, and VideoStream ensure clear and efficient communications at all times – even when faced with onrushing mobile video demands.
With rising demand for connections and increased traffic volumes, mobility solutions that readily adapt to meet new service demands are of high value. Cisco offers unmatched scalability and service intelligence in wireless networks. For example, Cisco’s 802.11n product portfolio combined with 802.11n design and support services are evidence of our drive to boost performance, while easing adoption of new mobility capabilities. Cisco’s context-aware capabilities and strong security services are yet other areas of mobility leadership.
And given Cisco’s architectural approach to Borderless Networks, it should come as no surprise to find Cisco leading in bringing together wired and wireless networks. Here, we offer physical integration via Catalyst-based WLAN controllers and ISR-based WLAN access points. Integration is further heightened through the consolidation of wired/wireless security systems and policies, enabling access and user protection no matter the connection.
The New Workplace is In-The-Moment.
Application-layer traffic controls within the network work to ensure consistent response times and “always-on” access. QoS, NBAR, NAM, and PISA are just some of the key technologies Cisco provides in support of a quality application experience. Each provides its own unique boost to the customer experience.
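As a rough illustration of how such application-layer controls are typically wired together, an IOS-style policy that uses NBAR to classify traffic and QoS to prioritize it might look like the sketch below. The class names, interface, and percentages are hypothetical, not a Cisco-recommended design:

```
class-map match-any VOICE-VIDEO
 match protocol rtp audio          ! NBAR classifies real-time audio streams
 match protocol rtp video          ! ...and real-time video streams
policy-map WAN-EDGE
 class VOICE-VIDEO
  priority percent 30              ! low-latency queue for real-time media
 class class-default
  fair-queue                       ! remaining traffic shares the link fairly
interface GigabitEthernet0/1
 service-policy output WAN-EDGE    ! apply the policy on the egress interface
```

The general pattern is the same regardless of the application mix: classify with NBAR, mark or queue with QoS, and attach the policy where congestion actually occurs.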
For remote and branch offices, Cisco’s WAAS solution optimizes applications across the WAN. Offered as both a standalone solution and one that is integrated with the Cisco Integrated Services Router, WAAS minimizes WAN traffic volumes, supports efficient content delivery and user exchanges, and enables new remote systems such as video kiosks. Most importantly, WAAS provides a first-class application experience to the remote user. Given that this remote user can often be a worker interacting with end customers or an actual end customer interacting with online transaction systems, a good experience translates directly to heightened customer satisfaction – and likely, repeat and referral business.
Taking application optimization one step further are solutions aimed at supporting applications within the network itself. This not only improves the performance of applications, but also helps minimize the complexity associated with deploying and operating network-based applications. The Cisco ISR hosts applications via its Application eXtension Platform (AXP) – a service module inside the ISR. The Cisco ASR serves as a web conference manager via its WebEx Node service module.
The aim of these network-hosted applications is to ensure the effective and efficient delivery of premium service – no matter the location or application.
As stated previously, network service levels are judged purely by the quality of the “customer” experience with any and all networked applications. No borders. No limits.
Are you doing all you can to make sure your “customers” are completely satisfied with their networked experience? Do onrushing video applications represent a significant challenge to your network? Do mobile users and devices represent opportunities or threats to your network? Do additional business applications represent heightened productivity improvements or lowered service levels? Let us know where you stand, and how we can help improve not only your status quo but also your future outlook.
Ty Kiisel, today’s guest author, writes about project management for @task as an “accidental” project manager. He shares many of the lessons learned from personal experience and conversations with customers, hopefully demonstrating that it really doesn’t matter what industry you’re in: the rewards of successfully executing project-based work are universal.
As covered wagons made their way along the Oregon Trail headed for the gold fields of California or the lush timber of Oregon, whenever the wagon wheels started to squeak, the wagon driver knew it was time to stop and grease the squeaking wheel before it failed. Along the trail there wasn’t the equivalent of a Firestone or a Goodyear to get a replacement. A failed wheel was inconvenient at best or a matter of life and death at worst.
Originally, I think this phrase implied that problems should be fixed as soon as they are identified. But over the last 100-plus years, the term has become associated with “the person who complains the loudest gets what they want.”
Organizations that rely on a “first come, first served” approach to making project decisions, or worse, the “whoever screams the loudest” approach, might get projects out the door. But are they the right projects?
For most organizations, keeping people busy isn’t the challenge; it’s keeping people busy doing the right things. Evaluating every potential project against pre-determined metrics ensures that the business value of every project will be assessed objectively, regardless of the stakeholder. Knee-jerk reactions to the demands of influential stakeholders can be expensive. Spending valuable resources on projects that provide minimal value can be catastrophic.
Establishing a process that requires every potential project to demonstrate its value based upon pre-determined criteria gives executives confidence that they are making well-informed project decisions. Important questions to ask when evaluating any project include:
1. What are the high-level objectives of the project? It’s not uncommon for a project to morph into something very different from what was originally intended. Specifically identifying the goals of every project helps project teams, sponsors, and stakeholders stay on track.
2. What are the estimated costs of the project, and the anticipated rewards? Without answers to these questions, it becomes difficult to determine whether the potential project will provide any business value, let alone the greatest value.
3. Does the potential project align with the mission, vision, and values of the organization? Individual projects must represent the execution of strategic direction if the desired result is to maximize every dollar spent in the pursuit of the greatest ROI.
4. What are the risks associated with pursuing the project under consideration? If potential project risks can be identified and evaluated while in the consideration process, actions can be taken to mitigate risk and increase the probability of success.
In a perfect world, every potential project that provided business value would be pursued. However, anyone doing project-based work understands that there always seems to be more work than there is time or resources to do it. That’s why establishing a method for evaluating every potential project is so important. Measuring and considering every potential project on its merits is the first step to effectively managing demand, and an important component of project success.
Should the squeaky wheel always get the grease? Probably not.
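The merit-based evaluation Kiisel describes can be sketched as a simple weighted scorecard. The criteria, weights, and 1-5 ratings below are purely illustrative assumptions, not an @task feature:

```python
# Hypothetical weighted scorecard for ranking candidate projects.
# Criteria and weights are illustrative assumptions, not @task defaults.
CRITERIA = {
    "strategic_alignment": 0.35,    # fit with mission, vision, and values
    "expected_roi": 0.30,           # anticipated rewards vs. estimated costs
    "risk": 0.20,                   # rate HIGHER when risk is lower or mitigated
    "clarity_of_objectives": 0.15,  # well-defined, trackable goals
}

def score_project(ratings):
    """Combine 1-5 ratings for each criterion into one weighted score."""
    return sum(weight * ratings[name] for name, weight in CRITERIA.items())

def rank_portfolio(projects):
    """Order (name, ratings) pairs by merit rather than by who asked loudest."""
    return sorted(projects, key=lambda p: score_project(p[1]), reverse=True)
```

Running every request through the same scorecard makes the squeaky wheel just one stakeholder among many.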
I spoke with Interop’s general manager Lenny Heymann yesterday and asked him straight out what he recommends attendees do to make sure they get the most out of the upcoming conference. His answer would send shivers down schoolboys’ spines everywhere: “Do your homework.” And while I was hoping for some secret path to networking enlightenment in Las Vegas next week (maybe a smoke-filled hotel backroom where Cisco, HP and ProCurve join hands to provide the be-all, end-all?), I suppose it’s solid advice, even if it’s not too inspiring. But I’d love to hear how you do your pre-conference homework, and what you’d like IT Knowledge Exchange to bring back when we venture forth with the SearchNetworking team next week to Las Vegas’ shiny sands and networking enlightenment.
Join in the discussion below, in the IT Knowledge Exchange member forums (and get your points!) or e-mail me directly at Michael@ITKnowledgeExchange.com. I’d love to hear from you, particularly if you’re going and would like me to throw some free swag your way in return for your own conference wisdom.
Earlier this month, Rivka Little, site editor for SearchNetworking, wrote a guest post about evolving networks and how it’s no longer enough to move bits from one end of the tube to the other: Today’s networking professionals need to master virtualization while becoming captains of the cloud. The former is so new it’s not in my spell check yet, and the latter’s so ill-defined you might as well have “Make it so” as the modern networking professional’s prime directive.
What’s a poor bit jockey to do? We at IT Knowledge Exchange feel your pain, and so we’re trying to pull together the best resources to help keep you at the top of your game today with an eye towards what you’ll need to succeed tomorrow, kicking off with our in-depth, on the ground coverage of Interop Las Vegas.
- Avaya’s Kevin Kennedy: SIP is the New IP
- This year, Interop Las Vegas is for the hungry
- Cisco’s quiet 122nd acquisition set the stage for CleanAir announcement
- Cisco’s CleanAir defends against WLAN-mutating microwaves, Bluetooth and more
- What recession? Interop Las Vegas sees paid conference attendance spike 30%
Also, be sure to check out SearchNetworking’s coverage on The Network Hub blog.
Enough learning; how about some doing?
The ultimate networking professional’s toolbelt:
Wondering if you’re using the right tools to make your job as easy as possible? I asked the IT Knowledge Exchange community for their recommendations, and they responded with their top networking tools and utilities. Entries included:
And many more. Read the blog post on building the ultimate network security and troubleshooting toolkit.
Frequently Asked Questions about Networking:
- Possible Networking Issue
- Is the CCNA, MCSE or RHCE a better networking certification to get jobs with high salary?
- Unmanaged vs managed switch
- What is a Cisco console cable?
- How to reset the Cisco Catalyst Switch to Factory Defaults
For a deeper dive, we’ve picked out some top reading recommendations, including reader reviews from IT Knowledge Exchange community members and bloggers, to help you really understand that abstruse topic your project lead wants you to master by last Monday.
Books on Enterprise Networking:
- Nmap Network Scanning
- Wireshark Network Analysis
- Practical Virtualization Solutions: Virtualization from the Trenches
Have another suggestion for this list? E-mail me at Michael@ITKnowledgeExchange.com or leave it in the comments.
Want to connect directly with experts? Read their blogs to hear straight from the horse’s mouth: The pioneers, cheerleaders and critics of cloud computing are often just a click away, and we’ve helped to organize the best of the best.
Top Networking Bloggers:
- Editorial networking news blogs
- Networking analyst blogs
- Networking user blogs
- Networking vendor blogs
The list is a work in progress, so leave a message in the comments if you know of a blog to add.
What else would make this guide useful to you? Let me know in the comments or e-mail me directly at Michael@ITKnowledgeExchange.com with any additions, corrections or suggestions.
As part of Networking Month on IT Knowledge Exchange, we’ve been highlighting networking questions in the community, and I thought it might be interesting to put together a FAQ of the most-asked and most-viewed networking questions:
- Possible Networking Issue What started as a routine inspection of slow file transfers became a great guide to general network troubleshooting and discussion about whether too many problems are unfairly lumped as “network” by other groups.
- Is the CCNA, MCSE or RHCE a better networking certification to get jobs with high salary? The answer isn’t quite clearcut.
- Unmanaged vs managed switch It might be a bit basic, but it’s also one of the most-read questions we have, with an in-depth answer that explains all the benefits unmanaged switches can offer.
- What is a Cisco console cable? Actually a blog post in the form of a question, David Davis’ explanation dives in with pictures and a clear explanation of how this ubiquitous piece of equipment works.
- How to reset the Cisco Catalyst Switch to Factory Defaults Another blog post, this how-to by Yasir Irfan is perpetually among the most-viewed pages on the site because it does one thing really well: help IT professionals get on with their jobs.
Today’s guest post is from Pete Schlampp, vice president of marketing and product management at Solera Networks.
The Identity Theft Resource Center (ITRC), the organization that tracks data breaches, reports 211 data breaches so far in 2010, and 26 of these involve financial services companies. According to the ITRC, many incidents actually occurred in 2009 but are just now being brought to light. Waiting weeks, months or longer to discover network breaches hardly seems acceptable. Even worse, the majority of these breaches involve an unknown number of records exposed. Why? Because there is no way to “replay the tape” and see exactly what was stolen or touched. Existing tools only record metadata and signature matches. Without good situational awareness, we’re dealing with the equivalent of digital hearsay.
Demand for better situational awareness, knowing and seeing what’s happening inside the network, has led to new technologies and the commercialization of tools that increase the resolution of what can be seen and known by security engineers. Technically, the ability to record network traffic and carve it into perceptible chunks has been around for years. Ask a network troubleshooter about tcpdump and Wireshark and they’ll gush like a carpenter over his favorite hammer. Network forensics companies have taken these technologies and created more robust, accessible, and maintainable tools. At the same time, the cost to store and process one hour of GigE network traffic has dropped from tens of thousands of dollars to hundreds in the past five years. The network forensics space is rapidly evolving and highly differentiated: performance, scalability, and the available analytical applications can vary widely.
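The “replay the tape” capability described above ultimately rests on capture files such as libpcap’s pcap format, the same format tcpdump and Wireshark read and write. As a minimal sketch (not any vendor’s product code), the following shows how a pcap file can be written and carved back into per-packet records using only the standard library:

```python
import struct

PCAP_MAGIC = 0xA1B2C3D4  # pcap file magic number (little-endian here)

def write_pcap(path, packets):
    """Write a minimal pcap file (linktype 1 = Ethernet).

    `packets` is a list of (timestamp_seconds, raw_bytes) pairs.
    """
    with open(path, "wb") as f:
        # Global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype
        f.write(struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, 65535, 1))
        for ts_sec, data in packets:
            # Per-packet record header: ts_sec, ts_usec, captured len, original len
            f.write(struct.pack("<IIII", ts_sec, 0, len(data), len(data)))
            f.write(data)

def read_pcap(path):
    """Carve a pcap file back into (timestamp, raw_bytes) records."""
    with open(path, "rb") as f:
        magic, = struct.unpack("<I", f.read(4))
        assert magic == PCAP_MAGIC, "unsupported byte order or file format"
        f.read(20)  # skip the rest of the 24-byte global header
        records = []
        while True:
            header = f.read(16)
            if len(header) < 16:
                break  # end of file
            ts_sec, ts_usec, incl_len, orig_len = struct.unpack("<IIII", header)
            records.append((ts_sec, f.read(incl_len)))
        return records
```

Commercial forensics appliances add indexing, session reassembly, and retention management on top of this basic record-and-replay idea, but the underlying capture format is the same.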
A recent survey indicates that many network security professionals don’t yet understand the need for network forensics and the situational awareness it can provide. With security tools based on signatures developed to block known threats, or on a collection of metadata spewed off of “dumb” network devices, security engineers aren’t equipped to know even simple details like who is on the network, what applications are being used, and what content is being transferred. This lack of perception forces enterprises and government organizations into reacting to security threats instead of proactively policing their networks and stopping threats before damage can occur. Improved situational awareness can lead to better security and higher resiliency against the backdrop of increasingly advanced and persistent threats. As security engineers become enlightened through situational awareness, they know and see exactly what’s happening on the network and can control it.
Wikipedia defines situational awareness, or SA, as “the perception of environmental elements within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.” In the world of physical security, we think of SA as seeing, hearing, and otherwise sensing the world around us. Major advancements in SA came with the advent of CCTV and the ability to remotely “see” what was happening, both live and in the past. On the network, we’re not responding to incidents in real time because we can neither see them as they happen nor go back in time and replay events to better understand the current situation.
Without situational awareness, IT security teams respond to incidents in the same way a fire department responds to fires – a bystander calls up to report the problem. By far, the most typical way for an incident on the network to be discovered is by a third party or employee notifying IT that something strange has happened: for instance intellectual property has been found outside the network, a server is running slowly, or a bad actor is bragging about their success. The 2009 Verizon Business Data Breach Investigations Report finds that over 80% of network breaches are discovered either by third parties or by employees going about their regular work activities – not by our existing automated security devices. Because of this, incidents are discovered late, lack data and detail, and lead to higher costs to organizations, industries and individuals.
Network security teams that make the shift to improved situational awareness gain the insight that true security requires. They can stop reacting to security issues and start seeing problems and knowing what to do about them before an incident becomes a network security issue. New tools, particularly network forensics appliances from companies like Solera Networks and our competitors, can reduce the occurrence of network breaches, augment understanding of network alerts and incidents, and enable security teams to recognize exactly what data may have been compromised, so they can proceed consciously and confidently to provide better security.
After writing about the importance of network forensics in securing your corporate front lines, I thought it might be helpful to pull together some of the top tools for actually helping protect and maintain your network. Have a suggestion to add to our list? E-mail me at Michael@ITKnowledgeExchange.com or update our community Wiki.
Networks are the corporate crime scenes of today. Just ask Google, TJX, or any one of the thousands of companies that have seen their networks turned against them. IT professionals need to step up their game when it comes to dusting for digital prints.
Fortunately, they’ve got a set of tools that (almost) makes CSI look amateur, and some of the best tools have fallen into the domain of networking professionals, according to Gartner’s John Pescatore (bio).
“We have a broader array of tools called data forensics, and one half of that is network forensics and the other half is computer forensics, which you can put on every PC and server. The network products have the major advantage that it’s very expensive to put software on everybody’s PC and server, and people … can very often disable that software,” he told the IT Watch Blog recently in an interview. “The network tools are more widely used because of those advantages.”
Rather than watching every bit on every computer, network tools watch the choke points: They can see what users are downloading and uploading, e-mailing and IM’ing, and even record all that data for later playback, like a closed circuit television camera or omniscient network DVR.
But just like on CSI, most security lapses today aren’t discovered until somebody turns up dead or, in corporate terms, until the customers start complaining and stuff starts breaking.
Today’s guest post comes from Rivka Little, site editor of SearchNetworking.com and my former colleague from my days in TechTarget’s networking group. I asked her if she’d be willing to write a guest post for this month’s look at all things networking, and she agreed, taking on the challenging topic of how networks are going to matter as we enter the age of the cloud, virtualization and other technologies that promise to push IT out of the office. You can read more of Rivka’s reporting and analysis at The Network Hub blog.
The network has been forgotten. At least that seemed to be the case over the last couple of years amid the hubbub surrounding server virtualization and cloud computing.
But stark realities have brought the network back into focus. Server virtualization and cloud computing aim to dynamically deliver applications and data — provisioning and de-provisioning resources on demand. There is no doing that without a new kind of network.
Networking teams are no longer solely responsible for architecting, implementing, securing and managing LANs and WANs. Now they find themselves implementing unified data center fabrics that converge storage and data center networks so that applications can flow freely from their resting state through to the WAN and LAN.
Networking teams also find themselves responsible not only for routing and switching between physical machines, but deep within the server. They are managing traffic both within the server between virtual machines and among physical servers in multiple data centers.
This will eventually lead to the creation of virtualized network components that sit atop of physical switches and routers. Among SearchNetworking readers surveyed in 2009, 40% said managing virtualization would be a top priority for the networking team in 2010.
Networking pros will also use these virtualization management skills in building out cloud computing networks. Network architects find themselves building both private clouds and hybrid clouds that interconnect private data center resources with those in public facilities.
Among SearchNetworking members, 35% say their companies are considering building an internal cloud in 2010, while another 30% say their networking resources will be affected by supporting external cloud services.
The shift to the cloud model will require users to push intelligence away from the data center core and into the layers of the network. Enterprises will seek intelligent edge switches with baked-in access control, security, visibility and management. Routers and switches will act as servers with built-in application-specific firewalls and bandwidth management. This type of manageability will mean the ability to burst bandwidth up and shrink it down according to application demand.
Finally, all this shifting in technology comes along with a serious change in culture for networking teams. More than ever before, IT organization silos are fading and networking, systems and storage teams are pressed to work together to enable unified fabrics, virtualization and cloud computing networks. As this transition occurs, networking pros will have to make their voices heard and claim their central role. That shouldn’t be too difficult as networking technology has already surfaced as the lifeline of these emerging technologies.