November 5, 2013 5:05 PM
Posted by: Nlyte Software
By Mark Harris, vice president of marketing and data center strategy at Nlyte Software
Ours is a remarkable, interconnected world, where mobile devices now outnumber people and everyone expects access to any information at any time. The concept of instant gratification has never been so pronounced. And this isn’t limited to our personal lives; it extends into business just as much. Inside most corporations, remotely accessed applications are now the key to running the business, so the demands upon the data center, and upon the company’s use of cloud services, are growing rapidly. Much of this capacity growth is being addressed through the dynamic abstraction of computing inside data centers, in the cloud, or in any combination of these services.
Together they have become a critical component of any company’s fiscal livelihood. In short, our lives are being rapidly transformed by access to information at any time of day or night through mobile portals, driven hard by a mass of back-end technology that is itself transforming to account for dynamic capacity and its underlying cost structures. But while the front-end portal devices are becoming ubiquitous and highly available, when the tightly managed back-end services that support them falter, business stops.
The current trend toward addressing this need for robust dynamic capacity is to virtualize the data center infrastructure across the server, storage and networking domains, and to span private and public clouds at the same time. This creates what is commonly referred to today as a Software Defined Data Center (SDDC). An SDDC allows capacity to be added or removed without the knowledge of the users or applications. These dynamic data centers provide computing as a utility rather than as a rigid structure, and in fact each service may be delivered differently from moment to moment.
The good news is that, done properly, this abstraction will drive your overall computing costs down. For instance, with each virtualized server (Guest) instantiated on a physical (Host) server, that Host provides additional application computing capacity with no need to purchase a new piece of hardware. With virtualization, the devices themselves are utilized at a much higher rate than previously seen. The bad news? With an increasingly virtualized data center, your success is even more susceptible to problems that arise from a lack of visibility into physical devices and the complexity of power and cooling load fluctuations.
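As a rough illustration of why utilization climbs, consider consolidating lightly loaded physical servers onto virtualization Hosts. The figures below (10% average utilization, a 70% per-Host ceiling) are assumptions chosen for the sketch, not measurements from any real data center:

```python
# Illustrative consolidation math; all utilization figures are assumptions.
physical_servers = 100
avg_util_pct = 10        # assumed pre-virtualization CPU utilization, percent
host_ceiling_pct = 70    # assumed safe utilization ceiling per Host, percent

# Each Host can absorb Guests until it reaches its utilization ceiling.
guests_per_host = host_ceiling_pct // avg_util_pct      # 7 Guests per Host
hosts_needed = -(-physical_servers // guests_per_host)  # ceiling division

print(f"{guests_per_host} Guests per Host: {hosts_needed} Hosts "
      f"replace {physical_servers} physical servers")
```

In practice a capacity planner would also weigh RAM, I/O and peak (not average) load, which is exactly the physical-layer visibility problem described above.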
Put on your structural engineering “hard hat” and ask yourself this about your current data center: is it built on a solid, scalable, well-understood and well-managed hardware foundation, or is it instead vulnerable, resting precariously on the assumption that there is enough physical capacity to handle whatever loads are placed on it at any point? Remember that historically, when data center services failed, most users could still continue the majority of their work, since their local devices contained significant computing capability of their own. In the traditional computing model of years past, data center failures were inconvenient, but not catastrophic. With the new paradigm of always-on connected portal access to back-end computing services, data center failures stop business. Read on to understand more of what can be done to assure that back-end services continue to be available.
Today’s Data Center Challenge
Let’s consider what’s now occurring within the data center environment. As we know, the demands upon business applications and their information access are growing dramatically, while the actual floor space available for computing is not. At the same time, the economics of computing are driving the need to reduce all costs. As a result, data centers are being updated with much higher-capacity and higher-density equipment. As more servers are “crammed” into a rack, each rack draws more power and thus generates more heat, which requires more cooling (and even more power) per square foot. Virtualization is layered on top of this highly dense structure. In roughly half of today’s data centers, virtualization is being used to drive the utilization of this dense hardware to unprecedented levels. All of this drives the need for a well-managed and actively planned data center infrastructure. Devices need to be placed in service quickly, maintained accurately, and then decommissioned when their value declines. It’s really about lifecycle management. There is simply no room for low-performance or aging equipment in this new high-density structure.
Are you managing the lifecycle of your data center assets, or are they sitting ghost-like, taking up precious space and power? How accurately are you able to plan and forecast your data center’s capacity? Are you executing fiscal asset planning that takes into account capital depreciation cycles and the resulting opportunity for technology refreshes? Do you have repeatable processes and operations to consistently execute all of the above? Do you know how much any compute transaction costs your business today, and what it will cost tomorrow? In the abstracted data center, where failure can paralyze business, these questions demand your attention, and every data center manager and operator needs to consider whether their core foundations are ready for the transformations now underway.
The Challenges for SDDCs
The challenge for the owners and operators of Software-Defined Data Centers is that in today’s world, resources are finite. Long gone are the days when structures were overbuilt, oversized, over-provisioned and overcooled. In that world, data center capacity was a discussion about which active devices to choose. The underlying structure, since it was overbuilt, was essentially infinite in nature. Enough headroom existed that new applications and new requirements would never come close to consuming all of the space, power and cooling available.
In the SDDC, abstractions exist across the board that allow work to be moved or migrated from place to place in real time. Instances of servers can be started or moved dynamically. While this dynamic capability sounds good at first, the consumption of resources underneath also changes, and it is this very set of resources that is no longer infinite in nature. It is quite conceivable that the movement of workloads in a data center could trigger catastrophic failures associated with power and/or cooling overloads.
As abstraction takes hold, the need for active management of the physical layer grows. In short, the adoption of SDDC technologies requires the deployment of DCIM to assure physical, logical and virtual layers are coordinated.
A Must Have for the Software Defined Data Center: DCIM
The solution for managing the foundation of your data center business is Data Center Infrastructure Management (DCIM), defined by Gartner as the integration of IT and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center’s critical systems. It extends from the physical facilities up to and including the virtualized instances of capacity. DCIM is the purpose-built suite of software intended to provide management of the physical components in a data center, the devices, the resources, etc. DCIM provides this lifecycle management over long periods of time. Today’s DCIM suites can be tightly integrated with the other management fabrics that are in place. Most importantly, modern DCIM suites are strategic extensions to the corporate IT management fabric.
The current trend of virtualization is here to stay. When virtualization spans across the servers, networks and storage components it creates a Software Defined Data Center, and the SDDC must be built upon a solid foundation of actively managed resources. DCIM suites are the means to actively manage the physical aspects of the data center in the context of virtualization, assuring that the dynamic changes associated with abstraction are planned for. Regardless of the architecture deployed for virtualized servers, storage and networking, it is the physical infrastructure that supports your business and without it, your business stops.
September 20, 2013 5:42 PM
Posted by: Workspot
By Ty Wang – Co-Founder and Vice President for Workspot
As consumers, we’re melded to our mobile devices. The gadgets themselves and, of course, the apps are part of our personal identities. As workers, we dream of the same level of personalization — and we’re often willing to pay for it by using our own devices for work. Employers have wholeheartedly embraced the always-on employee and the supposed productivity gains to be derived from letting people use their mobile weapon of choice.
But there’s a problem: working on a mobile device doesn’t always work. Let’s face it: using SharePoint on mobile is a soul-crushing experience today. People in enterprises struggle to work on their personal tablets because the vast majority of companies have applications and data secured behind firewalls, on locked-down laptops and within restricted wireless networks.
Billions of dollars and years of time have been invested designing and building this infrastructure to be secure and reliable. Now, consumers want this same infrastructure to be available on what are predominantly consumer devices designed for messaging, media consumption and games.
Struggling to work
Let’s look at the example of Karin, an account manager at a 1,000-person consumer apparel company. What happens when Karin gets an urgent phone call to review documents and approve a transaction? She takes out her company-issued laptop. She fires up her VPN and accesses it by inputting the code from her security token. She then logs into SharePoint and Outlook email. Depending on her laptop and connection speed, this process might take five minutes.
Karin would love to do this work on her personal iPad, the device she always carries in her purse. Compared to the company laptop, it’s slick, speedy and seamless. But it’s not an option, thanks to Justin in IT. There are 600 million people like Karin working within traditional enterprises today.
Now, Justin in IT is not the bad guy. He’s spent countless months integrating systems to provide access without data leakage and millions of dollars to manage identity and risk. And now the CIO is talking about BYOD?
If Justin has to lock down personal devices with mobile device management, or MDM, it defeats the whole purpose of BYOD. Karin will hate it, because Justin will know everything she does on her personal device. Except for email, every new secured business app will need special permission. And she doesn’t like the idea that Justin will be able to lock or wipe her device should she lose it or leave the company.
For his part, Justin isn’t happy about the prospect of installing new MDM servers in his datacenter and taking on the added overhead of managing mobile devices.
What the people want
Today’s enterprise users want their core business applications to be as seamless as their consumer apps. They don’t want to be guests on their company’s wireless network; they want to log in once and access all their work applications and secure documents. They don’t want to switch between browser tabs for different mobile business applications; they want to stay within a single mobile workspace for both applications and documents. They don’t want broken user experiences when trying to access behind-the-firewall documents from SaaS and other applications.
The ideal BYOD scenario would also create a clear wall between personal apps and content, and information related to work applications and documents. That way, if Karin’s iPad ever is compromised, Justin can wipe company data and leave her vacation photos intact.
This ideal BYOD scenario would also leverage all the great work Justin and the IT team have done to integrate back-end systems into business workflows and make them accessible through web applications via VPN and authentication systems.
Enterprises don’t need yet another parallel system for delivering applications to brought-in mobile devices. In fact, most already have what they need to deliver applications and data effectively from behind the firewall. And here at Workspot, we happen to believe that this isn’t a distant future, but something that can happen today.
Getting rid of blind spots
Let’s assume that Karin and Justin can agree on one place for work, with the rest of the mobile device remaining personal. One of the lingering issues in deploying BYOD is the lack of context and visibility into activity on the device itself. When Karin says, “My SharePoint is running slow on my iPad,” what does Justin do? He goes to his application server and network monitoring systems and verifies that no outages occurred. But this still doesn’t address the fact that Karin, and perhaps many of her peers, are experiencing slow apps. Justin simply does not have actual user-experience data on what is happening on each individual mobile device, so he can’t make the adjustments that would improve Karin’s (and others’) experience and lead to increased productivity.
August 28, 2013 5:32 PM
Posted by: Dr. Werner Hopf
SAP has been at the forefront of ERP providers with its vision for supporting the “real-time enterprise.” For the last several years, the company has consistently been introducing technologies that are steering business strategy in this direction. Data, obviously, is at the core of this vision. More specifically, SAP’s vision is predicated upon helping businesses store, organize and leverage all this data in ways that dramatically enhance understanding, engagement and responsiveness to strategic objectives.
However, the ever-accelerating speed of business and the quest for highly granular data analysis, coupled with explosive data growth, have created significant performance bottlenecks in the ways that corporations manage and retain documents and data across their extended enterprises.
These bottlenecks can have crippling effects on organizations using SAP ERP applications, especially if the business is also consolidating data management through a third party Enterprise Content Management (ECM) system. With data generated at historic rates and real-time query applications demanding instant accessibility, an ECM-centric approach to data management can drag down business performance.
The reason? ECM’s raison d’être was to provide secure, long-term storage, image capture and document management. But it wasn’t designed for the ever-increasing data volumes being generated today. The infrastructures of such solutions have continued to expand substantially, increasing maintenance overhead – and support costs – with the addition of a wide range of extended components such as web content management, business process management, workflow design tools, social collaboration and digital asset management, to name a few.
In the drive toward “real-time” operational and competitive responsiveness, core ECM functionality such as managing the content lifecycle from creation to destruction and delivering, preserving and storing data remains essential. In many instances, however, traditional ECM systems are not able to keep pace with process changes and can actually slow operations. There may also be latency or synchronization issues between the ECM and ERP systems. In some cases, these systems have become all-encompassing and are sapping both administrative and budgetary resources. ECM solutions were developed as generic retention management systems and have little real ability to be fine-tuned to fit customer-specific needs.
It used to be that the more complex a system was the better; today it is simplicity that takes center stage. Companies are now focused on the value of content rather than the technology. And for solutions, they want a short implementation cycle.
Organizations have made a significant investment in ERP offerings, and in SAP systems in particular. The ERP has, in essence, become the ‘system of record.’ But there are now twenty-first-century solutions, already in production, that take a more efficient approach to data management, one designed to fit SAP’s vision.
These low-cost, flexible systems enable archiving that meets performance and legal requirements and supports a broader data management strategy. Dolphin’s best practices now include a much lighter-weight option: it uses SAP solutions as the application layer and hard disk storage systems for long-term data retention to deliver fast, reliable access to stored content.
Implemented as “stateless” translation software between the SAP solution and the long-term storage devices holding archived data, this approach means data and documents do not have to be maintained on the content management server. Instead, the server maintains only configuration data and, optionally, cached copies of stored documents for performance. Persistent information resides on the storage hardware layer and within the SAP solution. With a stateless implementation comes the advantage of reusing the existing backup and replication functionality of both the SAP system and the storage hardware. Perhaps best of all, this requires no additional configuration.
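To make the “stateless” idea concrete, here is a minimal sketch of a translation layer that holds only configuration and an optional cache, while the persistent bytes live on the storage layer. Every class, function and path name here is a hypothetical illustration, not an actual SAP or Dolphin API:

```python
# Hypothetical sketch of a stateless archive-link layer. Persistent data
# lives on the storage hardware and in the ERP system; this middleware
# keeps only configuration plus a disposable in-memory cache.

# Stand-in for the long-term storage hardware; in practice this would be
# a filesystem, object store, or WORM appliance call.
_STORAGE = {"archive/INV-001": b"archived invoice bytes"}

def read_from_storage(path):
    return _STORAGE[path]

class ArchiveLink:
    def __init__(self, storage_root, cache_enabled=True):
        self.storage_root = storage_root            # configuration only
        self._cache = {} if cache_enabled else None # optional, disposable

    def retrieve(self, doc_id):
        """Resolve an ERP document ID to its bytes on long-term storage."""
        if self._cache is not None and doc_id in self._cache:
            return self._cache[doc_id]              # performance cache hit
        data = read_from_storage(f"{self.storage_root}/{doc_id}")
        if self._cache is not None:
            self._cache[doc_id] = data              # safe to lose; rebuildable
        return data
```

Because the layer holds no persistent state of its own, backing up the SAP system and the storage layer covers everything; losing this server or its cache costs nothing but a warm-up.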
The result is a content management system that can be implemented faster and administered simply, with rapid information retrieval for business purposes and upwards of 95% data compression for economical storage. Another bonus: implementation can be completed in just a few days.
The vision of “real-time” operational responsiveness is closer than ever before. The challenge is how to achieve it in the face of ever-growing data generation and consumption. The best approach is likely not with software-heavy ECM systems that were constructed to support the data management requirements of the 1990s, but rather, with lightweight, focused components that emphasize both speed and a lower, more attractive total cost of ownership.
What’s your take on the future of this approach? Is it time to sunset ECM?
August 12, 2013 7:34 PM
Posted by: Nlyte Software
By Matt Bushell
We’ve all heard of Moore’s Law: every two years, the number of transistors on an integrated circuit (and thus processing power) doubles. Gordon Moore made this observation way back in 1965, and it became popularized in 1970 – so it shouldn’t be news to any of us. In fact, it is a reality in our daily lives. Think of a smartphone and your cellular contract: every two years you are eligible to get a new phone from your carrier, and your carrier is willing to underwrite the bulk of its cost. Why? Because the company can make enough money on your contract to cover the hardware cost of the phone. Think about this for a moment: the services are more valuable than the hardware. Now let’s put that in our back pocket (metaphorically – I wouldn’t recommend putting a phone in a back pocket lest its screen crack). The other reason is that the cellular carrier knows a new model will come out in two years, which gives it an opportunity to entice you to stay. A lot of mobile phone innovation has to do with processing power (you could probably land 1,000 lunar missions with your iPhone, but don’t quote me on that).
So to recap: technology is advancing, processing power doubles every two years, and service is more valuable than hardware. Now let’s talk about servers and the data center. Your data center has a LOT of servers. Thousands of them. You may consider keeping them all running a badge of honor (you should), but are you getting the message from your cellphone provider (in your case, the IT Finance department) that every three years they’re ready to replace that server? Think about that: EVERY THREE YEARS YOU NEED TO REPLACE A SERVER. A thousand servers, 333 replaced each year. A third of your data center turned over each and every year. “Really?” you may ask. It’s simple: think of a server’s cost, the power it consumes and the workload it executes. The hardware (and software) costs typically account for only 15 to 25 percent of the overall costs associated with installing, maintaining, upgrading and supporting a dedicated server. So, if you can save HALF of the power a server consumes by replacing it with its newer cousin after three years (Moore’s Law), and the fixed cost is a fraction of its overall life, why wouldn’t you change out that old server (“but it’s only three years old”) for a new one?
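The back-of-the-envelope version of that argument can be written down directly. The dollar figures below are hypothetical assumptions for illustration, not vendor pricing or measured data:

```python
# Hypothetical 3-year refresh economics for a 1,000-server fleet.
fleet_size = 1000
refresh_years = 3
servers_per_year = fleet_size // refresh_years   # ~333 replaced each year

# Assumed figures: annual power+cooling cost per old server, and the
# share of total cost of ownership that hardware/software represents
# (midpoint of the 15-25 percent range cited above).
old_power_cost = 2000.0   # USD per server per year (assumption)
hw_share = 0.20           # hardware is ~20% of TCO (assumption)

# The Moore's Law argument: the replacement halves power consumption.
annual_savings = old_power_cost * 0.5
savings_over_refresh_cycle = annual_savings * refresh_years

print(f"Replace {servers_per_year} servers per year; each replacement "
      f"saves about ${savings_over_refresh_cycle:,.0f} in power over "
      f"{refresh_years} years, against hardware that is only "
      f"{hw_share:.0%} of TCO")
```

Plug in your own power rates and TCO split; the point of the exercise is that the recurring costs, not the purchase price, dominate the decision.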
Your Finance department has other laws it needs to abide by, specifically Generally Accepted Accounting Principles (GAAP). Under GAAP, capital equipment like servers cannot be fully depreciated in less than three years. And there are financial impacts of leaving a server in place past its three-year schedule: lost tax benefits, lease overruns (plus lease cost acceleration beyond the initial term), and the cost escalation of service and/or warranty extensions. What about the cost of abandoned servers that take up space and idle power, artificially consuming precious capacity that could be freed up, or worse, leading you to think you need to expand your data center? Or the cost of running dual sets of gear when the replacement process is slowed by inefficiencies, so that lease, warranty, depreciation, service and power costs all overlap?
So EVEN IF Finance drove the rest of the organization to tightly manage asset replacements every three years, it would still be violating ole’ Gordon’s law by an entire year. But at least it would be doing its best not to be a lawbreaker. The question is: are you?
Matt Bushell leads Nlyte’s Product and Corporate Marketing efforts as their Sr. Product and Corporate Marketing Manager. Prior to joining Nlyte, Matt worked at IBM for more than ten years, helping to launch multiple products in their Information Management and SMB groups.
August 12, 2013 6:13 PM
Posted by: Renodis
According to a recent study by McAfee, 13,000 different kinds of malware targeting mobile devices were found in 2013, compared to fewer than 2,000 in 2011. In addition, Symantec’s 2013 Internet Security Threat Report indicates that one watering hole attack infected 500 organizations in a single day! Since this threat shows no signs of slowing down, it’s more important than ever to learn not only how to spot mobile viruses and phishing attempts, but also how to enact a level of control and protection for your mobile environment.
This topic will be a three-part blog series. In part 1, we will cover what mobile viruses and phishing attempts are, why it’s important to know about them, and how to identify them. In the upcoming second and third parts, we will cover the controls and protection you can put in place to protect your mobile device, as well as corporate-level protection with an MDM (Mobile Device Management) platform.
Mobile Viruses and Phishing Attempts: What Are They?
For this blog I will concentrate on the two most popular mobile operating systems: Android and iOS. In the simplest terms, I define mobile viruses and phishing as follows:
Mobile Virus – This is software that is designed to attack mobile devices. The most common types are Trojans and Worms.
- On iOS, application viruses are rare due to the stringent requirements and application review process Apple imposes on the applications allowed in the App Store.
- Another common way to get a virus is via SMS messages, though this is rare on non-jailbroken devices.
- If the device is jailbroken, then all bets are off. By jailbreaking your iPhone, you pretty much kill the defenses that were in place to protect you against most viruses.
- More malware applications are found in the Google Play store than in Apple’s App Store, due to the more open application distribution model on Android devices. Android is becoming more and more of a hacker’s paradise. A good example is one of the latest viruses for Android, “Android.Pincer”, a Trojan horse that steals confidential information and opens a back door on the compromised device.
- If you have changed your settings to allow applications from unknown sources, this also opens the device up to potential malware applications.
- Just like a jailbroken iPhone, a rooted Android device can be more susceptible to viruses. But I must admit that expert users may prefer a rooted phone, gaining full access to the phone in order to:
- Load powerful apps
- Get better backup and restoration (for example, using Titanium for backups)
- Get better performance and flash custom ROMs
Note: with regard to jailbroken and rooted devices, they can be controlled in different ways depending on ownership (corporate-liable or individual-liable). I’ll go into more detail on this topic in the second part of this series.
Phishing – This involves an attempt to get information from the user. Some of the key information most attackers are after: passwords, names, addresses, Social Security numbers, and any other confidential information.
According to Kaspersky Lab’s Anti-Phishing Component Detections, the top three methods to gain information from users are:
1. Social Networking Sites – at 35.93%
2. Search Engines – at 14.95%
3. Financial E-pay Organizations and Banks – at 14.93%
The risk of phishing attempts hits iOS and Android about the same because these attacks are looking for information from the users. But apparently there is something unique about the way Apple delivers SMS messages that makes the iPhone particularly vulnerable to spoofing or smishing (SMS phishing) attacks.
Mobile Viruses and Phishing Attempts: Why is it Important to Know About Them?
You have made a significant investment in your mobile device in terms of time, money, and content. Losing this investment greatly impacts your daily life and productivity. The loss of confidential information may also open the door to identity theft, which can mean credit card fraud, employment-related fraud, bank fraud, benefits fraud, and wage fraud, just to name a few.
Mobile Viruses and Phishing Attempts: How to Identify Them?
Mobile Virus –
Watch for behavior changes. I know that may sound strange, because most times that phrase is used in reference to people, but have you ever tried to do something on your phone and it did not perform the way you expected? If you are saying that a lot, you might have a virus.
- Look for SMS text messages that you did not send
- Take note if files have been removed that you knew were there previously
- Watch for files that do not open that you could open before
- Watch for applications that are trying to download onto your device from unknown sites or unsolicited sources (Android)
Phishing –
- Beware of unsolicited calls or emails asking for information
- If you are filling out an online form, pay close attention to the information being provided, especially if it is a site or an organization that you are not familiar with
- Beware of threatening calls or emails, for example: “Your credit has been compromised; please call us right away” (asking for confidential information) or “fill out this form to resolve the issue” (asking for confidential information)
Mobile Viruses and Phishing Attempts Part 1: The Summary
To recap: viruses (a.k.a. malware) are more prevalent on Android devices than on iOS, but phishing attempts hit both types of phones almost equally. Also, looking for strange behavior can be a good indicator of an infection. Now that you know what mobile viruses and phishing attempts are, why they matter, and how to identify them, stay tuned for parts two and three of this blog, which detail how to protect yourself from viruses and phishing attempts, plus corporate-level protection with an MDM.
August 7, 2013 3:57 PM
Posted by: Nlyte Software
by Mark Harris
As a general rule of thumb, the more connected your data center infrastructure management (DCIM) solution is to your existing IT management frameworks, the more strategic your DCIM deployment will be. The more connected the solution is, the larger the population of users will be. And the larger the population, the more financial impact DCIM will have in your organization.
DCIM is a fairly new category of IT management solution, and its definition remains loose depending on whom you talk to. As a point of reference, the industry analyst firm Gartner defines DCIM as the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center’s critical systems. Achieved through the implementation of specialized software, hardware and sensors, DCIM enables a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.
The analysts’ definitions vary a bit and in general are very broad, which has created an environment in which vendors pop up daily with their own interpretations. In general, DCIM tools provide real-time information about the physical assets found in a data center, the resources they consume, their life cycles, and the mapping between the virtual, logical and physical layers within the IT structure.
In a market such as this, many potential DCIM customers start their research and investigation of available DCIM solutions in a very hands-on tactical mode. DCIM comes in all shapes and sizes, and in fact includes everything from sensors and power monitoring, to full-fledged life cycle management suites. Each potential customer looks at the pieces that appear to have the most relevance to their management goals today. They start their DCIM journey looking for solutions that fit into their existing ways of doing business, new tools to be applied to their old approaches.
Luckily, a few are beginning to see the much bigger opportunity, and their tactical thinking quickly transitions to attaining much more strategic value across wider audiences, ultimately realized through deeper integration into their existing thought processes and their rapidly evolving services management structures. The very role of the CIO has changed from “all services at ANY cost” to a more value-oriented approach: “the right level of services at their value to the company.”
DCIM is Strategic when connected to ITSM
DCIM delivers core value when deployed in a manner that does not make it yet another island of features. Many of the DCIM industry’s early adopters started their journey with DCIM tools that were nothing more than enhanced drawing solutions, where visual fidelity reigned supreme. In fact, many of the currently available DCIM solutions, and even the latest open-source OpenDCIM projects, are basically enhanced spreadsheets with drawing built in, which works great for documenting devices but still misses the BIG opportunity.
Now we are starting to see a fundamental shift in discipline and accountability. Everyone wants to look forward rather than backward, and relatively few folks are trying to protect their previous ways. The ITIL-like approaches (anything that enables discipline and accountability) are becoming much more interesting in this new climate. That’s where DCIM thrives. In this new climate, new DCIM solutions must complement and integrate with existing management apps.
IT Service Management, or ITSM, is the set of process-based practices intended to align the delivery of information technology services with the business needs of the enterprise.
DCIM (when done right) forms the critical physical extension to ITSM. What do I mean by ‘done right’? It’s when DCIM is deeply integrated with ITSM and becomes part of the critical path for change management. How will you know you’ve been successful? The number of users will grow into the dozens or hundreds, remedial accuracy will increase, time frames for labor-intensive operations will shrink, your asset costs will be right-sized, and your capacity planning will begin to include the physical layer itself, including all of the energy-related costs. DCIM is not about drawing pretty pictures; it’s about extending the service management discipline to include the physical layer, which has been largely ignored for the past 40 years. DCIM directly supports the transformation that is already occurring across your IT structure.
Mark Harris is the vice president of marketing and data center strategy at Nlyte Software, with more than 30 years’ experience in product and channel marketing, sales, and corporate strategy. He also shares musings on the industry at DCIMExpert. Nlyte Software is an independent provider of Data Center Infrastructure Management (DCIM) solutions.
July 23, 2013 2:27 PM
Posted by: Renodis
Wireless networks are bigger and better than they used to be, but consumer complaints about coverage have not tapered off. Mobile device technology is advancing at a breakneck pace, yet we complain about our smartphones. Why? The paradoxical nature of wireless consumer sentiment is rooted in the “inconvenience associated with convenience”: we are forever chasing the moment when we have the latest innovation. The barriers to reaching that temporary nirvana, combined with its fleeting nature, are the greatest sources of disdain for something that should consistently amaze us. Here are 4 perhaps surprising reasons you hate your smartphone.
Reason #1 You Hate Your Smartphone: It’s Too Smart, But Not the Smartest Available
The evolution of cell phones during the last decade has been nothing short of amazing. Advances in technology, including the rise of the smartphone and mobile applications, have occurred at an unprecedented pace. Many people can’t even keep up with the capabilities of their own smartphone, yet it doesn’t take long for our expectations to exceed what we’re able to get out of these devices. We have become accustomed to, and have come to expect, this rapid evolution of technology. The latest and greatest smartphone stays the latest and greatest for only a month or two before something new makes it obsolete; once we have our device, we get a couple of months at the top of the mobile food chain before being knocked off.
My suggestion in this area is to wait to upgrade until you understand the landscape and the technologies that will soon be on the market. Check out www.ctia.org for upcoming smartphone technology that will allow you to stay ahead of the game. And don’t be afraid to talk to every smartphone geek you can find (computer stores and big-box electronics stores are excellent sources of geek knowledge).
Reason #2 You Hate Your Smartphone: I Can’t Get the Latest and Greatest
We can’t upgrade again for two years without paying double or triple what the device would cost with an eligible upgrade. The reason is that the phone’s cost is heavily subsidized by the wireless carrier, and the contract is a way to protect the carrier’s investment. This is the biggest frustration for mobile consumers: locking people into a two-year contract on a device that will be outdated, if not obsolete, within a few months hardly seems fair, and so far no one has come up with a good solution. T-Mobile recently announced the end of contracts on smartphones, so you now have the option of a high-end smartphone on their network with no contract. In reality this means you’re still paying the unsubsidized cost of the device, just spread out over monthly installments. To me, this feels like buying a phone from Rent-A-Center: you’re paying the full amount (or more) either way, but it doesn’t hit you all at once.
My suggestion for the consumer who wants to future-proof their phone is to check the value of the phone through the major recycling companies. The current buy-back on a used iPhone 5 in excellent condition, on any carrier, is about $340 through www.E-cycle.com or www.Gazelle.com. You could conceivably upgrade to an iPhone 5 for $199.99 on a two-year contract, keep it for a few months (in good condition), and then upgrade to another smartphone at full retail (around $650). You would have the new phone for 3 months, then upgrade to another device for about $310 after factoring in the credit on the iPhone from the recycler. It’s one way to take the sting out of upgrading at full retail when you have not reached your upgrade subsidy date.
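The arithmetic behind this suggestion is simple enough to check. Using the figures quoted above (around $650 for the new device at full retail, roughly $340 buy-back on a used iPhone 5 in excellent condition):

```python
# Net cost of upgrading at full retail, offset by recycling the old phone.
# Dollar figures are the approximate 2013 prices quoted in the post.
full_retail_price = 650   # new smartphone, unsubsidized
recycler_buyback = 340    # used iPhone 5 in excellent condition

net_upgrade_cost = full_retail_price - recycler_buyback
print(f"Net out-of-pocket cost: ${net_upgrade_cost}")  # $310
```

That $310 net is in the same ballpark as a typical subsidized upgrade price, which is why the recycler route can take the sting out of an off-cycle upgrade.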
Reason #3 You Hate Your Smartphone: My Carrier is the Worst
Several factors may contribute to someone’s allegiance to one carrier vs. another. When the market became saturated, the carriers ran out of new customers, and it became harder and harder for them to win new customers without stealing them from each other. To steal customers, the carriers had to focus harder on the strengths of their networks and the weaknesses of their competitors. First this happened with voice; today it’s happening with high-speed data. We are inundated with commercials touting who covers the largest population or who has the biggest, most reliable data footprint. With all of this information floating around in our heads, we add our own experiences to the mix and usually pick a side. We decide to favor one carrier over another and hold contempt for at least one competing carrier. You will see this kind of brand loyalty in other industries (like the auto industry), but in wireless it is approaching the divisiveness of politics in an election year. Though largely uninformed, and without scientific method, we arrive at the conclusion that carrier X is terrible and carrier Y is the best. If you happen to have service with the former, you have decided your carrier is the worst.
My suggestion is to combine personal experience with third party knowledge like www.ConsumerReports.org, www.RootMetrics.com, or www.PCWorld.com to help gain a better understanding of your preferred carrier.
Reason #4 You Hate Your Smartphone: It’s Too Expensive to Repair/Insure
Insurance options can be very expensive. They are either worth it or a complete waste depending on your behavior, device replacement or repair options, and associated costs.
My suggestion in the Enterprise Account world is to negotiate upgrade eligibility waivers and early termination fee waivers during a contract renewal. These are valuable tools if you need to replace a damaged device in a pinch. If you are an individually liable subscriber, I suggest a rugged case and a third-party repair option like www.Gophermods.com. Insurance is still worth it for some individuals whose potential loss in business outweighs the annual cost of insurance.
About the Author
Brian Dykhuizen is the Mobility Manager at Renodis and has over 12 years of experience advising clients in all areas of mobile support.
July 10, 2013 2:26 PM
Posted by: Renodis
Will your in-house data center solution stand up to power, weather, or security issues? It can be challenging for mid- to enterprise-sized businesses to find the right Disaster Recovery and Business Continuation solution. Options range from a complete outsourced-managed solution to building out your current in-house data center. Colocation, which is having your equipment housed at a third party’s data center, fits right in the middle of the Disaster Recovery and Business Continuation spectrum. Colocation has become a popular choice for businesses that are looking to the “cloud” but are not in a position to implement a fully managed environment today.
If your strategy is to build out an in-house data center, you need to consider the long-term costs and restrictions. Some of the many challenges you may face if you keep your data center in-house include space limitations, lease expirations, unstable power, and lack of protection from weather and security threats. In-house centers tend to be antiquated and constrained, and in the long run very expensive in terms of both direct and indirect resources. This is why colocation has proven to provide businesses with a combination of benefits that no other option can match.
So, here are 5 Ways Colocation Can Redefine Your Business and deliver multiple benefits to your IT/Telecom Environment.
How Colocation Can Redefine Your Business: #1 – Mid-Size Businesses Gain Access to IT/Telecom Engineering Support
The first benefit colocation provides is unmatched support from certified professionals. Colocation facilities provide CCIE-level support staff around the clock. Mid-size businesses rarely have the financial flexibility to support a 24×7 IT staff, or the training and expertise to manage such an environment. Customizing a solution to meet the needs of the business can be challenging, so leaning on this engineering support is paramount for success: with so many different back-up and redundant configurations available, having experts assist with a customized solution is invaluable during the transition to cloud/colocation.
How Colocation Can Redefine Your Business: #2 – Reduced Energy Consumption
Hardened data centers have a greater opportunity to reduce overall levels of energy consumption. Improved efficiency not only reduces the total amount of power the entire data center uses, but also decreases the total costs for businesses renting cages/space/racks.
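One standard way to quantify the efficiency gain described above, though the post doesn’t name it, is Power Usage Effectiveness (PUE): total facility power divided by IT equipment power. A rough sketch of the annual cost difference between a less efficient in-house room and a purpose-built colocation facility; all of the numbers here (loads, PUE values, electricity rate) are illustrative assumptions, not figures from the post:

```python
# Sketch: compare annual power cost at two assumed PUE levels.
# PUE = total facility power / IT equipment power (1.0 is ideal).

def annual_power_cost(it_load_kw: float, pue: float,
                      cost_per_kwh: float = 0.10) -> float:
    """Total facility energy cost per year: IT load * PUE * hours * rate."""
    hours_per_year = 24 * 365
    return it_load_kw * pue * hours_per_year * cost_per_kwh

in_house = annual_power_cost(it_load_kw=50, pue=2.0)  # assumed older in-house room
colo = annual_power_cost(it_load_kw=50, pue=1.5)      # assumed efficient colo facility

print(f"In-house: ${in_house:,.0f}/yr, Colo: ${colo:,.0f}/yr, "
      f"savings: ${in_house - colo:,.0f}/yr")
```

Because tenants typically pay for the space and power they consume, a lower facility PUE flows through directly to the costs of businesses renting cages, space, or racks.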
How Colocation Can Redefine Your Business: #3 – Scalability
One of the most challenging issues for businesses is managing space for their physical IT assets. In-house data centers quickly become outgrown as businesses continue to add resources. Colocation allows for businesses to rent only the space they need with the ability to grow on demand. Colocation provides your business the ability to scale as needed.
How Colocation Can Redefine Your Business: #4 – Improved Infrastructure
Colocation centers are built from the ground up, which allows them to use IT-friendly designs. These include multiple layers of redundancy to ensure continuous uptime, and clean working environments that keep servers operating as efficiently as possible. Most colocation sites are also housed in weather-proofed buildings to mitigate weather-related risks.
How Colocation Can Redefine Your Business: #5 – Access to Carrier networks
Colocation centers offer businesses a multitude of carrier networks to choose from. Many different flavors of providers and solutions are available, which drives down bandwidth rates. Multi-carrier environments also provide diverse access options, adding yet another layer of redundancy.
So Now What?
While there are numerous options available to handle your Disaster Recovery and Business Continuation needs, colocation is one source that offers you all of these benefits. If you are uncertain where to go and what to do next, keep in mind there are many resources available to assist with nailing down strategy and putting together a plan for your business.
Jonny Wright is an experienced industry expert in Telecom Management and Enterprise Account Manager at Renodis.
July 1, 2013 11:45 AM
Posted by: Renodis
In a recent blog post I talked about the 6 Ways IT Leaders Can Be Relevant to the Business.
- Sit at the Executive table
- Partner with leaders of the business
- Act like a problem solver and solution provider, not an order taker
- Outsource where necessary, delegate when necessary
- Run IT like a business
- Hire the right talent and hold them accountable
But now let’s take a dive into a few examples of just how IT leaders are driving results and becoming relevant to the business.
Retail company ABC wants to improve the customer experience, improve customer data gathering, and increase supply chain efficiencies. The IT organization proposed using multichannel commerce (eCommerce, POS, Supply Chain, CRM) to help increase sales, make it easy for their customers to order their products online, and reduce customer wait times when visiting their retail locations.
Financial company ABC needed to reduce network downtime for voice and internet communication, and also needed to improve customer data security. The IT organization proposed upgrading network bandwidth to improve performance, increasing security, implementing a disaster recovery plan to protect data, and deploying web based applications at branch locations to support customer data.
Company ABC in the manufacturing industry needed to improve internal collaboration and communication, increase employee productivity, and improve internal employee knowledge of products and services. The IT organization proposed and implemented collaboration tools and web based platforms to improve customer relationship management and internal communication/productivity between employees, customers, and partners.
Company ABC in the distribution industry wanted to increase field sales, improve the customer experience, and improve productivity. The IT organization proposed using mobility and tablets in the field to deliver web based training and web based platforms for order entry, resulting in improved employee performance, increased efficiencies for order entry, and an improved customer experience.
Bringing It All Together
Supply chain, inventory processing, collaboration, web based training, GPS, mobility, improving bandwidth to improve efficiencies and network performance… Whatever your business does, systems and technology can help drive results for the business and improve the experience for customers and internal end users. But all of these things need to be tied to the overall future strategy of the business, not just a way to use technology for the sake of using technology.
It is the job of IT to work with executive teams and stakeholders to understand their strategies, goals, desired outcomes, and key challenges, and how IT can help them achieve these results.
Ryan Carter is an experienced industry expert in Telecom Management and Enterprise Account Manager at Renodis.