Networks are the corporate crime scenes of today. Just ask Google, TJX, or any one of the thousands of companies that have seen their networks turned against them. IT professionals need to step up their game when it comes to dusting for digital prints.
Fortunately, they’ve got a set of tools that (almost) makes CSI look amateur, and some of the best tools have fallen into the domain of networking professionals, according to Gartner’s John Pescatore.
“We have a broader array of tools called data forensics, and one half of that is network forensics and the other half is computer forensics, which you can put on every PC and server. The network products have the major advantage that it’s very expensive to put software on everybody’s PC and server, and people … can very often disable that software,” he told the IT Watch Blog recently in an interview. “The network tools are more widely used because of those advantages.”
Rather than watching every bit on every computer, network tools watch the choke points: They can see what users are downloading and uploading, e-mailing and IM’ing, and even record all that data for later playback, like a closed circuit television camera or omniscient network DVR.
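The choke-point idea can be sketched in a few lines. This is an illustrative toy, not any vendor’s product: the flow-record format and the threshold are invented for the example, standing in for what a network forensics appliance would capture at an egress point.

```python
# Toy sketch of choke-point monitoring: flag unusually large uploads
# from flow records observed at a network egress point.
# FlowRecord and the 50 MB threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class FlowRecord:
    src: str        # internal host
    dst: str        # external host
    bytes_out: int  # bytes uploaded in this flow

def flag_large_uploads(flows, threshold_bytes=50_000_000):
    """Return the flows whose outbound volume exceeds the threshold."""
    return [f for f in flows if f.bytes_out > threshold_bytes]

flows = [
    FlowRecord("10.0.0.5", "203.0.113.9", 1_200),
    FlowRecord("10.0.0.7", "198.51.100.2", 75_000_000),  # suspicious bulk upload
]
suspects = flag_large_uploads(flows)
print([f.src for f in suspects])  # ['10.0.0.7']
```

A real appliance records far more (payloads, timestamps, protocols) for that DVR-style playback, but the filtering principle is the same.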
But just like CSI, most security lapses today aren’t discovered until somebody turns up dead or, in corporate terms, until the customers start complaining and stuff starts breaking.
Today’s guest post comes from Rivka Little, site editor of SearchNetworking.com and my former colleague from my days in TechTarget’s networking group. I asked her if she’d be willing to write a guest post for this month’s look at all things networking, and she agreed, taking on the challenging topic of how networks are going to matter as we enter the age of the cloud, virtualization and other technologies that promise to push IT out of the office. You can read more of Rivka’s reporting and analysis at The Network Hub blog.
The network has been forgotten. At least that seemed to be the case over the last couple of years amid the hubbub surrounding server virtualization and cloud computing.
But stark realities have brought the network back into focus. Server virtualization and cloud computing aim to dynamically deliver applications and data — provisioning and de-provisioning resources on demand. There is no doing that without a new kind of network.
Networking teams are no longer solely responsible for architecting, implementing, securing and managing LANs and WANs. Now they find themselves implementing unified data center fabrics that converge storage and data center networks so that applications can flow freely from their resting state through to the WAN and LAN.
Networking teams also find themselves responsible not only for routing and switching between physical machines, but deep within the server. They are managing traffic both within the server between virtual machines and among physical servers in multiple data centers.
This will eventually lead to the creation of virtualized network components that sit atop physical switches and routers. Among SearchNetworking readers surveyed in 2009, 40% said managing virtualization would be a top priority for the networking team in 2010.
Networking pros will also use these virtualization management skills in building out cloud computing networks. Network architects find themselves building both private clouds and hybrid clouds that interconnect private data center resources with those in public facilities.
Among SearchNetworking members, 35% say their companies are considering building an internal cloud in 2010, while another 30% say their networking resources will be affected by supporting external cloud services.
The shift to the cloud model will require users to push intelligence away from the data center core and into the layers of the network. Enterprises will seek intelligent edge switches with baked-in access control, security, visibility and management. Routers and switches will act as servers with built-in application-specific firewalls and bandwidth management. This type of manageability will mean the ability to burst bandwidth up and shrink it down according to application demand.
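One simple policy an edge device could use for that burst-and-shrink behavior is proportional sharing. The sketch below is a hedged illustration, not any switch vendor’s algorithm: the allocation rule, app names and numbers are all invented.

```python
# Hedged sketch of demand-driven bandwidth management: grant requests
# outright when the link has headroom, and scale all grants down
# proportionally when it is oversubscribed. Policy invented for illustration.

def allocate_bandwidth(link_mbps, demands):
    """demands: {app_name: requested_mbps} -> {app_name: granted_mbps}."""
    total = sum(demands.values())
    if total <= link_mbps:
        return dict(demands)          # everything fits; grant as requested
    scale = link_mbps / total         # oversubscribed; shrink proportionally
    return {app: mbps * scale for app, mbps in demands.items()}

# 2000 Mbps of demand on a 1000 Mbps link -> every app gets half its ask.
grants = allocate_bandwidth(1000, {"voip": 200, "backup": 1200, "web": 600})
print(grants)  # {'voip': 100.0, 'backup': 600.0, 'web': 300.0}
```

Real edge switches would add priorities (VoIP should not shrink like a backup job), but the demand-tracking principle is the same.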
Finally, all this shifting in technology comes along with a serious change in culture for networking teams. More than ever before, IT organization silos are fading and networking, systems and storage teams are pressed to work together to enable unified fabrics, virtualization and cloud computing networks. As this transition occurs, networking pros will have to make their voices heard and claim their central role. That shouldn’t be too difficult as networking technology has already surfaced as the lifeline of these emerging technologies.
Looking to boost your networking career, or simply bone up on the latest trends and topics in your field? You’ve come to the right place: I’ve polled analysts, authors, IT pros of all stripes and, of course, our very own member community.
Top reads so far (click the title for more information):
- Nmap Network Scanning
- Wireshark Network Analysis
- Practical Virtualization Solutions: Virtualization from the Trenches
If the Internet really is a series of tubes, it’s network engineers who keep those tubes running. But how, exactly, do you keep them running today while keeping an eye on what you’ll need tomorrow? Get the experts’ opinions from our picks of the top networking blogs. Know of a great networking blog we’ve missed? Sound off in the IT Knowledge Exchange forums, where other IT professionals are chiming in with their thoughts.
- The Network Hub by the Search Networking editors
- Window on WANs by the Search Enterprise WAN editors
- IT Trenches by Troy Tate
- David’s Cisco Networking Blog by David Davis
- Network Technologies and Trends by Yasir Irfan
- Security Corner by Ken Harthun
- Network Administrator Knowledgebase by Michael Khanin
- Cisco Blog
SQL injection attacks are a constant thorn in the side of security practitioners, claiming the dubious distinction of being the attack vector for the largest U.S. ID theft case ever. And while tools are arriving on the scene to help businesses root out potential problems before the bad guys do, there are plenty of attack vectors just waiting to be exploited. The latest case? An image floating around the web showing a, er, creative license plate cover designed to foil traffic cameras:
Will it work? Unlikely (see commentary on Gizmodo), but it’s a good reminder that attacks can come from the darnedest places. It’s also a nice throwback to the classic SQL injection comic from XKCD:
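For developers, the standard defense against this whole class of attack is the parameterized query. Here’s a minimal, self-contained demo using Python’s sqlite3 module (the table and data are invented for the example):

```python
# SQL injection in miniature: the classic fix is to pass user input as a
# bound parameter, never by pasting it into the SQL string.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plates (plate TEXT)")
conn.execute("INSERT INTO plates VALUES ('ABC123')")

user_input = "'; DROP TABLE plates; --"   # a license-plate-style payload

# Unsafe (don't do this): string concatenation lets input rewrite the query.
# query = "SELECT * FROM plates WHERE plate = '" + user_input + "'"

# Safe: the driver treats the input strictly as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM plates WHERE plate = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the malicious string matched nothing, and the table survives
```

The same placeholder pattern (with `?` or `%s`, depending on the driver) applies to any SQL database, which is why it is the first recommendation in virtually every injection-prevention guide.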
This is a guest post and request for information by Johanne Murray, a Canadian research student at National Cheng Kung University in the Business Management Department.
The concept of green information technology has been around since 1992; however, like other green products, it has not experienced a tremendous growth rate. Green products in general have not followed a traditional product adoption model.
Stakeholders have now begun to put more pressure on companies to adopt greener technology systems, and many companies claim they are either in the process of doing so or have already done it. However, in my research so far I have not been able to get past the managers who are making these claims. This makes it difficult to understand the adoption process and the perceptions of personnel.
Although it is of great interest to speak with the project managers, directors and CEOs who are mandating Green IT within their companies, it is difficult to base academic research on these claims alone. So far there has been little academic research based on the personnel and management actually working with these newer, greener IT systems.
Are companies really going green, or is it something that is just stated to appease stakeholders? Where are the personnel who are adopting these new systems? Are you supposedly using green IT in your workplace? Does it make a difference? Does it make your work easier? Was it easy to adapt to? Do you feel there were sufficient resources and education to adopt this technology?
This research is attempting to answer these questions; however, it has been a challenge finding people who are actually using recently adopted Green IT.
Is green IT just a myth? Is it a case of company greenwashing, or are these companies really transferring their technology?
This academic research is dedicated to the advancement of Green IT.
If your company has mandated and adopted Green IT and you are using green computers or other information technology that is more environmentally friendly than its predecessor please take a minute to fill out this survey:
Or if you are working for a company that claims it is adopting Green IT and you are not so sure and have issues with their claims please contact: firstname.lastname@example.org
All respondents’ details will be kept confidential.
If you are a manager or CEO and you are truly proud of your Green IT technology transfer and would like to make your company an example for others to follow, please contact email@example.com to become part of an exciting case study. This would include telephone interviews with a variety of personnel affected by the transfer. This is cutting-edge research and would be a great opportunity for companies to tell their Green IT story to the world.
Should you take the mobile plunge if you haven’t already? While many companies’ workforces are wired with the latest gadgets, IT departments have occasionally been hesitant to jump on board for a number of reasons. Today’s guest post – by Tim Scannell, editorial director of sister site TechnologyGuide.com – outlines why 4G might mean it’s time to re-think corporate wireless strategy.
One strong theme at this year’s CTIA conference, which wrapped up last week, was the evolution of mobile broadband. Loosely defined, this refers to everything and anything traditional broadband offers, but accessible through a mobile device – in the case of the CTIA cognoscenti, this specifically related to small, handheld systems.
Up until very recently, this has pretty much been a blue-sky concept since there were only a handful of devices that were really capable of providing a rich browsing experience. Also, the browser software still had a way to go in terms of development, and cellular infrastructures just weren’t up to snuff when it came to fast and reliable service.
All of that is changing rapidly, however. At CTIA, there were a number of interesting and powerful devices capable of operating across emerging 4G wireless networks – like the HTC EVO 4G, which will reportedly be the first smartphone available in the U.S. with built-in WiMAX (which, in many cases, provides much more reliable wireless access than cellular, particularly in congested urban areas). The new HTC system also runs Google’s Android OS and has a very large high-resolution display.
Newer classes of mobile computers – like netbooks – are also catching on in the small business and small enterprise markets, especially as the numbers of mobile workers increase and efforts continue to extend customer relationship management and internal information resources out to the point of customer contact. The number of online consumers who own a netbook has increased from 10 percent last year to 15 percent this year, with most people using a netbook as a second device and not a replacement for a notebook computer, according to a recent survey.
Tablet PCs are also finally finding their niche in mobile business computing, spurred by interest in the soon-to-be-shipped Apple iPad. Fifty-seven million “media tablet PCs” are expected to ship in 2015, according to analysts at ABI Research – roughly fourteen times the 4 million expected to ship this year.
As prices for mobile systems plummet and the wireless infrastructure becomes more reliable and varied with converged connectivity options (cellular, WiFi, WiMAX, etc.), it makes sense for companies of all sizes to have a mobile solutions strategy. Yes, there are some significant challenges, like mobile management, service and support, security and developing a collaborative strategy. But the benefits can be huge in terms of getting closer to customers and speeding transactions.
Since every company is different, it is difficult to come up with a ‘one size fits all’ return on investment (ROI) formula that can quickly validate initial purchases, training, support and other functions. Focusing too much on the cost of implementation and operations can also be a mistake – especially in a down economy where the mandate is slashing expenses rather than adding to expenditures.
To get a more realistic and long-term picture (as well as convince upper management a mobile strategy is working), an increasing number of companies are instead measuring the efficiencies created by a mobile strategy. At a major magazine distribution company, for example, the goal is to use mobile solutions to increase the efficiencies of every worker by about 5% – saving about 24 minutes of wasted time per day. When you translate that savings in time into dollars and extend it across hundreds or thousands of mobile workers, the cost savings can be in the millions, notes the IT director.
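The distributor’s math is easy to reproduce. The calculation below uses invented figures for headcount, hourly cost and workdays per year (only the 5%/24-minute target comes from the example above), but it shows how quickly per-worker minutes compound into millions:

```python
# Back-of-the-envelope version of the efficiency math above.
# Headcount ($), hourly wage and 250 workdays/year are illustrative
# assumptions; only the 5%-of-an-8-hour-day figure comes from the article.

def annual_savings(workers, minutes_saved_per_day, hourly_cost, workdays=250):
    hours_saved = workers * (minutes_saved_per_day / 60) * workdays
    return hours_saved * hourly_cost

minutes = 8 * 60 * 0.05                    # 5% of an 8-hour day
print(minutes)                             # 24.0 minutes
print(annual_savings(2000, minutes, 30))   # 6000000.0 -- $6M/year at $30/hour
```

Even halving every assumption still leaves a seven-figure number, which is why efficiency metrics tend to be an easier sell to upper management than implementation cost alone.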
The real question to consider then is not how much implementing mobile systems and services will cost, but what the expense will be if you do not take the plunge and make mobile broadband an integral part of your business strategy.
If you’re a fan of Tim’s writing, be sure to check out his soon-to-be-launched blog, Technology Guide Lines, hosted right here on IT Knowledge Exchange.
If web apps are really going to take off in the way Google hopes, the Big G knows it needs to tighten up the security holes on web apps at large, no matter how elegant their own solutions are.
Enter skipfish, Google’s automated web security scanner, which was launched Friday by Michał Zalewski in a post on the Google Online Security Blog:
Today, we are happy to announce the availability of skipfish – our free, open source, fully automated, active web application security reconnaissance tool. We think this project is interesting for a few reasons:
- High speed: written in pure C, with highly optimized HTTP handling and a minimal CPU footprint, the tool easily achieves 2000 requests per second with responsive targets.
- Ease of use: the tool features heuristics to support a variety of quirky web frameworks and mixed-technology sites, with automatic learning capabilities, on-the-fly wordlist creation, and form autocompletion.
- Cutting-edge security logic: we incorporated high quality, low false positive, differential security checks capable of spotting a range of subtle flaws, including blind injection vectors.
For those worried that this just further enables malicious script kiddies to hunt out and play with gaping holes in your poorly designed web app (or that budget SaaS vendor your CIO chose), Google included this disclaimer:
First and foremost, please do not be evil. Use skipfish only against services you own, or have a permission to test.
We’ll see how long that lasts, but at least there’s another (open source, no less!) tool from a reputable company to help catch problems before someone else does. If you’re interested in a second opinion, the folks at Sucuri Security also took a closer look at skipfish and came away with a favorable impression.
This guest post is by Doug Willoughby, director of cloud computing for Compuware Corporation. Members interested in writing a guest post about an IT topic near and dear to their heart should e-mail community editor Michael Morisy at Michael@ITKnowledgeExchange.com.
Smart phones, social networks, Web 2.0, cloud computing, borderless applications: information technology is being reshaped by waves of disruptive innovations. Some enterprises will benefit from disruption, while others will be buried by it. Enterprises that position themselves to capitalize on innovation will benefit the most. To capitalize on innovation, successful enterprises are moving information technology to the forefront of product strategies, from a supporting role to the means of monetization.
In this environment, established IT organizations are likely to find that their greatest challenge is their own previous success. To deliver on time and on budget, successful IT organizations have optimized their processes based on assumptions about the environment. Innovations and the changing role of IT throw many of these assumptions out the window. As a result, the primary challenge for established IT organizations is how to adapt their existing best practices and tools to fit their new role.
One such best practice is the Agile development process. The Agile process has enabled IT organizations to be more responsive when faced with changing business requirements. In the past, the Agile process has been used exclusively by development teams. To address the operational requirements of ubiquitous access, social applications and cloud computing, successful IT organizations will become “Agile” across the application lifecycle, from requirements to operational deployment. These organizations will have all the advantages of being first to market, while enjoying lower operational costs.
To support an Agile application lifecycle, IT organizations need an integrated suite of tools that support each stage of the application lifecycle.
The Internet is the New Data Center
For the last fifty years, IT has been optimized around the assumption that applications and data centers are fairly static. Line-of-business owners generate requirements. Developers build and test applications in isolation before passing them over to the operations group for deployment. Once deployed, the operations group is responsible for monitoring the performance of the applications to detect and resolve problems.
Web applications are making the Internet the new enterprise data center. Research indicates that typical Web applications depend on six or more services located outside the direct control of the applications’ owners. These are not new-fangled cloud services, either. Rather, they are the bare minimum of ordinary Web services required to deliver a consistent and compelling user experience, and include such Web staples as content distribution networks (CDN), ad distribution networks, content management servers, analytic services, streaming media services and other types of service delivery platforms.
The Web breaks many of the assumptions built into the traditional application lifecycle model. For example, thorough testing of many Web applications before their deployment may be a practical impossibility. There can be too many variables for proper coverage testing. Client browser compatibility (which release, what operating system, which plug-ins and what configuration options) is just the beginning. End user experiences will vary depending on where users are physically located, not just due to network latency and bandwidth, but also because the quality of service provided by Web services can vary by geography.
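The combinatorics behind that “too many variables” claim are easy to see. The factor lists below are illustrative, not a real test plan, but even this small matrix multiplies out fast:

```python
# Why full coverage testing of a Web app is impractical: a handful of
# client-side factors already multiplies into hundreds of combinations.
# The factor lists are invented for illustration.

from itertools import product

browsers = ["IE6", "IE7", "IE8", "Firefox", "Chrome", "Safari"]
oses     = ["Windows XP", "Vista", "Windows 7", "OS X", "Linux"]
plugins  = ["Flash", "no Flash"]
regions  = ["NA", "EU", "APAC"]

configs = list(product(browsers, oses, plugins, regions))
print(len(configs))  # 180 -- i.e. 6 * 5 * 2 * 3, before any app-level variables
```

Add screen sizes, plug-in versions and third-party service behavior per geography, and the matrix grows multiplicatively with every new factor, which is the argument for leaning more on post-deployment monitoring.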
Performance Management is Critical for Agile IT
This is not to say that Web applications do not need to be tested before they are deployed. Instead, it suggests a new strategy that mirrors the Agile process that many development organizations have already adopted. This strategy is called the Agile application lifecycle.
The Agile application lifecycle is a cross-organizational approach that brings together line of business (LOB) owners, developers, and IT operations. Agile teams work closely with LOB owners to define requirements and with IT operations to detect and resolve problems quickly. The approach is characterized by smaller, more frequent releases. New functionality is tested as extensively as practical, but greater reliance is placed on detecting and resolving problems once the Web application is deployed.
To be successful, Agile teams need integrated tools that provide seamless visibility from development to testing and on through to the management of deployments. LOB owners are concerned with end user experience, developers need end-to-end visibility into composite Web applications, and operations teams need business-impact context and deep-dive tools to resolve problems quickly.
Capitalizing on Innovation
Enterprises that see change as a normal business driver will benefit most from disruptive innovation. Adopting an Agile application lifecycle strategy enables these organizations to react quickly to change. Agile is a cross-organization approach, so teams need tools that encourage the integration of concerns from LOB to operations and enable them to move seamlessly from requirements to deployment. The end-to-end visibility and end user experience context provided by integrated application performance management tools, such as those offered by Compuware, are a critical component of any Agile application lifecycle strategy.
Doug Willoughby is currently the Director of Cloud Computing for Compuware, a leading provider of APM tools for Web applications. Prior to Compuware he was at Sun, which he joined in 1988 and where he participated in the development and marketing of some of Sun’s most pioneering and disruptive technologies, including Project Spring, Distributed Objects Everywhere (DOE), NextStep/OpenStep, and Java. Willoughby was also part of the team of 14 engineers and architects who developed “network.com,” Sun’s first utility computing offering.
Compuware offers an integrated suite of application performance management tools. Gomez Actual User Experience XF and Vantage for End-User Experience provide LOB owners visibility into real user experience. Gomez Web Load and Performance Testing and Cross-Browser Testing tools, combined with Vantage for Java and .NET performance tools, give developers clear insight into how applications will perform when deployed. IT operations can leverage the full suite of Gomez and Vantage tools, including Vantage for Business Service Management, to understand the business impact of service problems.
- “Our intranet is optimized for Netscape Navigator 4.0.”
- “I hate our intranet with a rage as white hot as the sun.”
- “Just noticed that a hidden corner of our company intranet has a page with several lines marked “under construction” for, oh, 6 years or so.”