Vendor Tech Talk


January 28, 2014  3:03 PM

3 Tech Trends for 2014



Posted by: Dr. Werner Hopf
CIO, SAP, Uncategorized

Look for Big Data to get more strategic and more important in 2014. With the New Year in full swing, leading industry prognosticators are predicting that IT spending on Big Data will continue to grow as even more businesses continue their quest to gain a competitive edge through new-found insights enabled by Big Data. What level will this spending reach? How will these efforts affect the distribution of power in the C-Suite? And most importantly: where are the areas of Big Data that are truly impacting cash flow and the all-important bottom line?

Big Data is Getting Bigger

Managing data for insights, and balancing its value against the Total Cost of Ownership of maintenance, storage and analysis, is quickly getting more attention.

IDC predicts spending of more than $14 billion on Big Data technologies and services in 2014. That's a 30 percent year-over-year jump. The research group is also predicting explosive growth in Big Data analytics services, with the number of providers expected to triple in three years. Spending on these services will exceed $5 billion in 2014, growing by 21 percent.

This year the cloud will play an even bigger role, with IDC predicting a race to develop cloud-based platforms capable of streaming data in real time. Enterprises will also make increased use of externally sourced data and applications. IDC predicts spending on cloud services and the technology to enable these services "will surge by 25 percent in 2014, reaching over $100 billion." There will also be an increase in cloud-based datacenters, according to IDC.

C-Suite Alignment

A big point of discussion across the industry over the last year has been the increased involvement of the CFO in technology purchase decisions. Not only is this individual signing off on these expenditures, but oftentimes it is now the CFO driving the implementation of new technologies and services as part of a strategic plan to rein in costs and free up cash flow.

Further, we're witnessing increased alignment between these executive functions. They are building stronger relationships and cooperating on IT projects that implement new technologies, deliver the next level of cost reduction and enhance organizational agility.

Growing the Bottom Line

A grand promise of Big Data continues to be the ability to unlock new levels of predictive insight that will drive sales and increase revenues. It's a costly crusade that has yielded varying results to date, but what more executives are now realizing is that there are other data-related opportunities to grow the bottom line.

Business Processes: One successful strategy is a more harmonized approach to optimizing business processes such as accounts payable and accounts receivable, which results in significant financial and productivity benefits. For example, by leveraging a single end-to-end solution built to function within the SAP ecosystem, enterprises can not only maximize their existing investment in SAP solutions, but also shorten the invoice lifecycle, gain faster access to cash and improve audit responsiveness. This reduces late payments and penalties, helps increase cash flow, and mitigates risk.

Data Volume Management: The thought goes that the more data one collects, the more material one will have to unlock insights and trends. With next-generation in-memory databases such as SAP HANA, it's now possible to process and analyze more data in real time than ever before. However, storing all that data, particularly on SAP HANA, can be a costly endeavor. In fact, one lesson quickly learned by early adopters is that a complementary data volume management strategy, such as Nearline Storage and archiving, is needed to achieve a return on investment. We helped one customer implement Nearline Storage on SAP HANA and cut its payback period from more than 15 years (and likely never, once maintenance and upgrade expenses are factored in) to just two and a half years! Even better, Nearline Storage is not restricted to SAP HANA. It can be deployed on existing SAP platforms to achieve dramatic cost savings while providing transparent and seamless access to stored data.
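To see how the payback math changes, here is a rough Python sketch with purely hypothetical figures (not the customer's actual numbers): keeping everything in memory versus keeping only hot data in memory and pushing static data to Nearline Storage and the archive.

```python
# Rough payback-period sketch with hypothetical numbers (not actual customer figures):
# an all-in-memory HANA footprint versus one where static data is moved to
# nearline storage (NLS) and an archive tier.

def payback_years(upfront_cost, annual_run_cost, annual_benefit):
    """Years until cumulative benefit covers the upfront and running costs."""
    net_annual = annual_benefit - annual_run_cost
    if net_annual <= 0:
        return float("inf")          # benefits never catch up with costs
    return upfront_cost / net_annual

# Hypothetical figures, in thousands of dollars.
annual_benefit = 400                 # assumed value of faster reporting/analytics

# Keeping all data in memory: large appliance, high annual maintenance.
all_in_memory = payback_years(upfront_cost=3_000, annual_run_cost=350,
                              annual_benefit=annual_benefit)

# Keeping only hot data in memory, the rest in NLS/archive: smaller appliance.
with_nls = payback_years(upfront_cost=900, annual_run_cost=100,
                         annual_benefit=annual_benefit)

print(f"Payback, all data in memory:   {all_in_memory:.1f} years")
print(f"Payback, with NLS + archiving: {with_nls:.1f} years")
```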

Extended User Interfaces: SAP is also testing the waters with a new approach that links the old with the new by delivering extended user interfaces to those not ready to upgrade their current SAP platform to SAP HANA. Dolphin is presently just one of ten certified SAP partners (out of more than 12,000!) to work on the SAP NetWeaver Gateway. This allows SAP customers to give their current system interfaces a facelift with new, modern interfaces and web application mobility that allow for better performance, which in turn means more productivity.

Is your organization leaving cash on the table with clunky Accounts Payable/Accounts Receivable processes or is your TCO spiraling? Both?  What Big Data initiatives do you have planned for 2014? What are your predictions for Big Data this year?

December 12, 2013  2:14 PM

Extending the HANA Journey: Why Data Management Matters



Posted by: Dr. Werner Hopf
CIO, SAP

SAP's Tom Kurtz, VP of Global Strategic Initiatives, SAP HANA Services, and his collaborator Robert Hernandez, SAP's Director of In-Memory Services, North America, shared an insightful and helpful article on the SAP Community Network on October 18 about determining if and when SAP HANA is right for your organization. They called it "The HANA Journey," and rightfully so!

The value of HANA is unsurpassed when it comes to faster analytics, modeling flexibility, near real-time data replication, faster reporting and data loading, and many new applications and enhancements. Most of all, it can help optimize the power of Big Data.

However, one area Kurtz and Hernandez didn't address was managing and maintaining all of the data that is created, which is key to deriving a return on investment on HANA platforms. In their defense, the article was about determining when or if HANA is the right choice for an organization and how to approach the implementation; but part of the 'roadmap' must also be analyzing infrastructure and looking at how to efficiently use and manage the HANA platform to maximize the investment. Thus, I'd like to offer an addendum to the roadmap laid out in their article.

With Data, Less can be More

While the value of HANA is faster processing, reporting and analytics – an advantage that can provide an important competitive edge – one downside which must be accounted for is escalating data growth.  Let’s face it: “he who has the most data” is not guaranteed to succeed. In fact, the only guarantee when collecting data without a management strategy is rapidly increasing Total Cost of Ownership and slower processing speeds.

The key to ensuring optimal performance is to reduce and organize the data that populates your system. Accomplishing this objective requires the implementation of an archiving and complementary data management strategy. At the core of this approach is a robust data archiving plan, which will also help ensure predictable TCO for newer in-memory technologies that, while more capable, are already acknowledged as more costly.

With so much data, determining the starting point for tackling a data volume management project can be daunting. Start by recognizing that certain data is more valuable than other information, preferably prior to moving anything to HANA. Understanding the specific value that different data types have for your organization is a cornerstone of archiving and an important step toward harnessing and maintaining a lean and effective HANA solution.
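As a rough illustration of that classification step, here is a minimal Python sketch. The rules, thresholds and field names are hypothetical (not a Dolphin or SAP tool): records are sorted into the in-memory tier, nearline storage or the archive based on age, status and recent access.

```python
from datetime import date, timedelta

# Hypothetical tiering rules: hot, frequently used data stays in memory,
# older but still-reportable data goes to nearline storage (NLS),
# and closed, rarely touched records are archived.
def tier_for(record_date, is_open_item, accesses_last_quarter,
             today=date.today()):
    age = today - record_date
    if is_open_item or age < timedelta(days=365):
        return "HANA (in-memory)"
    if accesses_last_quarter > 0 or age < timedelta(days=3 * 365):
        return "Nearline storage"
    return "Archive"

# Example: a closed record several years old, untouched this quarter.
print(tier_for(date(2009, 3, 14), is_open_item=False, accesses_last_quarter=0))
# -> Archive
```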

Benefits of Nearline Storage

Dolphin has seen a number of enterprises achieve success in moving large amounts of static data to a lower-cost, high-performance nearline storage (NLS) environment, complementing both the SAP NetWeaver Business Warehouse (SAP NetWeaver BW) environment and SAP HANA's in-memory architecture. Even better: with a carefully planned implementation, NLS can be deployed on the current platform before HANA to achieve immediate benefits.

NLS is a cost-effective, scalable option for storing large volumes of data. It is a critical component of a data archiving strategy, supporting compliance while ensuring the right balance between performance and storage costs. Many CIOs understand the value of the transparent access to data that NLS provides, and its ability to keep database size and growth predictable through archiving processes.

A major energy provider running stand-alone HANA and BW on HANA found that although HANA offers compression, without adding NLS infrastructure the payback would be 15 years, or never once maintenance costs and upgrades are considered. However, with nearline storage and an archiving strategy in place, the payback on HANA is now calculated at an astonishing 2.4 years.

Harness the Full Potential of Big Data

With advances in the ways that Big Data is analyzed almost every day, businesses are at the threshold of incredible insight, but first must create a data environment which will allow the full potential of data to be harnessed. Whether six months or several years down the road from an implementation, for many companies this means formalizing better data volume management processes and adopting data governance capabilities as part of the roadmap for the HANA journey.


December 3, 2013  4:59 PM

The NSA’s Data Center Adventure



Posted by: Nlyte Software
Data Center, DCIM

by Mark Harris, Vice President of Marketing and Data Center Strategy for Nlyte Software

Over the past year, the NSA has come under intense public scrutiny for its intelligence gathering and data mining practices. If that wasn't bad enough, a recent set of articles published in the Wall Street Journal and Computerworld, among others, reported that the very data center in Utah built to support these activities was literally melting down with some very dramatic and high-profile failures. As taxpayers it is important for us to step back, consider huge-dollar data center projects like this and understand the reasons why they fail. Forget about what the NSA is doing with these data centers for a minute. We should ask the question: how could it invest almost a billion and a half dollars of public funds into projects that fail so miserably on day one? Most concerning, the same group chartered to deliver the Utah data center is also tasked with delivering another $900 million data center in Maryland.

In a nutshell, the NSA simply does not understand the technology and business of hyper-scale data centers. It doesn't have state-of-the-art experience when it comes to hyper-scale computing, and it has chosen to go it alone rather than follow the best practices pioneered by companies that build these hyper-scale centers, like Facebook, Apple and Google. Somehow the NSA embarked down a path of using public funds to build these mega-data centers without a fundamental understanding of what has changed in technology over the past dozen years, or of how to manage these investments over time. It would appear that the NSA built these data centers as larger versions of the simple types it built a dozen years ago. These new centers apparently were designed without provision for dense and highly utilized technology, such as blade chassis, virtualization, hybrid terabyte disk drives, huge in-memory databases and software-defined switches. It would seem that the NSA built these centers based upon an old data center model, and without any strategic thinking in terms of modernization or capacity planning. In the deployment phase, the Utah center was populated with vast amounts of the latest and most dense gear, and as demand grew unexpectedly, the NSA simply brought in more power to accommodate it. The ill-defined power structures literally melted and flamed as various loads were applied.

Data centers today are highly dense and dynamic in nature. The amount of processing demand, the location of that demand within the data center and the physically deployed technologies each change over time. Whereas a rack in 2002 may have consumed just two kilowatts, in 2014 the modern dense equivalent may consume TEN TO FIFTEEN TIMES that amount when fully utilized. To make matters worse for poorly designed data centers, the processing capacity found in a data center is not directly reflected by the number of physical devices installed. It may vary dramatically throughout each day. What has happened over the last ten years is a separation between capacity and control. Virtualization does this for servers, and software-defined techniques provide this for storage and networks. What this means is that the link between physical assets and their business value changes over time. As a result of the virtualization of processing, storage and networks, hardware can be refreshed or retired at will, without the need to impact applications. Hardware is simply added or removed, and the amazing capacity abstraction technologies handle the re-provisioning and re-initializing of these new devices, bringing them into service quickly. But these same abstraction capabilities can wreak havoc on a data center that was designed for a simpler model of static computing, where capacity and control were tightly connected. This is exactly what got the NSA in trouble and is still forcing rolling outages and overall capacity reduction. The most recent estimate to fix the data center resource issues at the Utah center exceeds $100 million.
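To put that density jump in perspective, here is a back-of-the-envelope Python sketch with hypothetical numbers showing how quickly a room provisioned for 2002-era racks is overwhelmed by modern, fully utilized equipment.

```python
# Back-of-the-envelope check (all figures hypothetical): a room designed for
# 2002-era racks at ~2 kW each versus modern dense racks that can draw
# ten to fifteen times that when fully utilized.
racks = 500
designed_kw_per_rack = 2           # what the facility was provisioned for
modern_kw_per_rack = 2 * 12        # mid-range of "ten to fifteen times"

designed_load = racks * designed_kw_per_rack   # 1,000 kW
actual_load = racks * modern_kw_per_rack       # 12,000 kW

print(f"Designed IT load:  {designed_load:,} kW")
print(f"Modern dense load: {actual_load:,} kW "
      f"({actual_load / designed_load:.0f}x the design point)")
# And cooling has to remove that heat, so total facility power rises even further.
```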

As a publicly funded entity, the NSA is just one highly publicized example of the need to actively plan and manage data centers based upon modern best practices, from inception and throughout their long lifespan. Not only do the devices housed within a data center have practical resource requirements, each has its own lifecycle and value over time. These lifecycles demand change to remain cost-effective. Abstraction allows this change to happen easily, but the tools to manage all of this change must be deployed so data center operators know what to expect, what to do next, and what the impacts might be. The NSA and all other government agencies would be well served to look at the hyper-scale architectures in use by their commercial counterparts. They should consider which pieces of those designs make sense to incorporate, look at what new tools could be deployed to better reflect the physical infrastructure lifecycle, and then look at capacity planning all the way from the cement up.


November 5, 2013  5:05 PM

The Software-Defined Data Center and Data Center Information Management



Posted by: Nlyte Software
Data Center, DCIM

By Mark Harris, vice president of marketing and data center strategy at Nlyte Software

Ours is a remarkable, interconnected world, where mobile devices are now more plentiful than people and the expectation is that anyone can have access to any information at any time. The concept of instant gratification has never been so pronounced. And this isn't limited to our personal lives; it carries into business just as much. Inside most corporations, remotely accessed applications are now the key to running the business, so the demands upon their data centers, and upon the company's use of cloud services, are rapidly growing. Much of this capacity growth is being addressed through the dynamic abstraction of computing inside data centers, in the cloud, or in any combination of these services.

Together they have become a critical component in any company's fiscal livelihood. In short, our lives are being quickly transformed by access to information at any time, night or day, through mobile portals. Those portals are driven hard by a ton of back-end technology, which is itself transforming to account for dynamic capacity and its underlying cost structures. While the front-end portal devices are becoming ubiquitous and highly available, when the tightly managed supporting back-end services falter, business stops.

The current trend toward addressing this need for robust dynamic capacity is to virtualize the data center infrastructure across the server, storage and networking domains and to span private and public clouds at the same time. This creates what is commonly referred to today as a Software Defined Data Center (SDDC). An SDDC allows capacity to be added or removed without the knowledge of the users or applications. These dynamic data centers provide computing as a utility rather than as a rigid structure, and in fact each service may be delivered differently from moment to moment.

The good news is that this abstraction will drive your overall computing costs down when done properly. For instance, with each virtualized server (Guest) instantiated on a physical (Host) server, that Host can provide additional application computing capacity with no need to purchase a new piece of hardware. With virtualization, the devices themselves become utilized at a much higher rate than previously seen. The bad news? With an increasingly virtualized data center, your success is even more susceptible to problems that arise from the lack of visibility into physical devices, and from the complexity of power and cooling load fluctuations.
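Here is a small sketch of that consolidation arithmetic (all numbers are assumptions for illustration): many lightly used dedicated servers collapse onto a handful of well-utilized hosts.

```python
# Hypothetical consolidation math: why virtualization drives host
# utilization up and hardware purchases down.
physical_apps = 120        # workloads that used to get one server each
avg_util_dedicated = 0.08  # assumed utilization of a dedicated box (8%)
guests_per_host = 8        # assumed consolidation ratio

hosts_needed = -(-physical_apps // guests_per_host)      # ceiling division -> 15
host_utilization = guests_per_host * avg_util_dedicated  # ~64% on each host

print(f"Servers before virtualization: {physical_apps}")
print(f"Hosts after virtualization:    {hosts_needed}")
print(f"Approximate host utilization:  {host_utilization:.0%}")
```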

Put on your structural engineering "hard hat" and ask yourself this about your current data center: is it built on a solid, scalable, well-understood and well-managed hardware foundation, or is it instead vulnerable, resting precariously on the assumption that there is enough physical capacity to handle whatever loads are placed on it at any point? Remember that historically, when data center services failed, most users could still continue the majority of their work because their local devices contained significant local computing capabilities. In the traditional computing model of years past, data center failures were inconvenient, but not catastrophic. With the new paradigm of always-on, connected portal access to back-end computing services, data center failures stop business. Read on to understand more of what can be done to assure back-end services continue to be available.

Today’s Data Center Challenge
Let's consider what's now occurring within the data center environment. As we know, the demands upon business applications and their information access are growing dramatically, while the actual floor space available for computing is not. At the same time, the economics of computing are driving the need to reduce all costs. As a result, data centers are being updated with much higher-capacity and higher-density equipment. As more servers are "crammed" into a rack, each rack draws more power and thus generates more heat, which requires more cooling (and even more power) per square foot. Virtualization is layered on top of this highly dense structure. In roughly half of the data centers today, virtualization is being used to drive the utilization of this dense hardware to unprecedented levels. All of this is driving the need for a well-managed and actively planned data center infrastructure. Devices need to be placed in service quickly, maintained accurately, and then decommissioned when their value declines. It's really about lifecycle management. There is simply no room for low-performance or aging equipment in this new high-density structure.

Are you managing the lifecycle of your data center asset devices, or are they sitting ghost-like, taking up precious space and power? How accurately are you able to plan and forecast your data center's capacity? Are you executing fiscal asset planning that takes into account capital depreciation cycles and the resulting opportunity for technology refreshes? Do you have repeatable processes and operations to consistently execute all of the above? Do you know how much any compute transaction costs your business today and tomorrow? In the abstracted data center, where failure can paralyze business, these questions demand your attention, and every data center manager and operator needs to consider whether their core foundations are ready for the transformations now underway.


The Challenges for SDDCs

The challenge for the owners and operators of Software-Defined Data Centers is that in today's world, resources are finite. Long gone are the days when structures were overbuilt, oversized, overprovisioned and overcooled. In that world, data center capacity was a discussion about the active devices to be chosen. The underlying structure, since it was overbuilt, was essentially infinite in nature. Enough headroom existed that new applications and new requirements would never come close to consuming all of the space, power and cooling available.

In the SDDC, abstractions exist across the board which allow work to be moved or migrated from place to place in real time. Instances of servers can be started or moved dynamically. While this dynamic capability sounds good at first, the consumption of resources underneath also changes, and it is this very set of resources that is now no longer infinite in nature. It is quite conceivable that the movement of workloads in a data center could trigger catastrophic failures associated with power and/or cooling overloads.

As abstraction takes hold, the need for active management of the physical layer grows. In short, the adoption of SDDC technologies requires the deployment of DCIM to assure the physical, logical and virtual layers are coordinated.

A Must Have for the Software Defined Data Center: DCIM
The solution for managing the foundation of your data center business is Data Center Infrastructure Management (DCIM), defined by Gartner as the integration of IT and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center’s critical systems. It extends from the physical facilities up to and including the virtualized instances of capacity. DCIM is the purpose-built suite of software intended to provide management of the physical components in a data center, the devices, the resources, etc. DCIM provides this lifecycle management over long periods of time. Today’s DCIM suites can be tightly integrated with the other management fabrics that are in place. Most importantly, modern DCIM suites are strategic extensions to the corporate IT management fabric.

The current trend of virtualization is here to stay. When virtualization spans across the servers, networks and storage components it creates a Software Defined Data Center, and the SDDC must be built upon a solid foundation of actively managed resources. DCIM suites are the means to actively manage the physical aspects of the data center in the context of virtualization, assuring that the dynamic changes associated with abstraction are planned for. Regardless of the architecture deployed for virtualized servers, storage and networking, it is the physical infrastructure that supports your business and without it, your business stops.


September 20, 2013  5:42 PM

We’re in Denial: Working on Mobile Doesn’t Work (Quite Yet)



Posted by: Workspot
BYOD, CIO, Cloud Computing, Consumerization

By Ty Wang – Co-Founder and Vice President for Workspot

As consumers, we’re melded to our mobile devices. The gadgets themselves and, of course, the apps are part of our personal identities. As workers, we dream of the same level of personalization — and we’re often willing to pay for it by using our own devices for work. Employers have wholeheartedly embraced the always-on employee and the supposed productivity gains to be derived from letting people use their mobile weapon of choice.

But there's a problem: working on mobile doesn't always work. Let's face it: using SharePoint on mobile is a soul-crushing experience today. People in enterprises today struggle to work on their personal tablets because the vast majority of companies have applications and data that are secured behind firewalls, on locked-down laptops and within restricted wireless networks.

Billions of dollars and years of time have been invested designing and building this infrastructure to be secure and reliable. Now, consumers want this same infrastructure to be available on what are predominantly consumer devices designed for messaging, media consumption and games.

Struggling to work

Let’s look at the example of Karin, an account manager at a 1,000-person consumer apparel company. What happens when Karin gets an urgent phone call to review documents and approve a transaction? She takes out her company-issued laptop. She fires up her VPN and accesses it by inputting the code from her security token. She then logs into SharePoint and Outlook email. Depending on her laptop and connection speed, this process might take five minutes.

Karin would love to do this work on her personal iPad, the device she always carries in her purse. Compared to the company laptop, it’s slick, speedy and seamless. But it’s not an option, thanks to Justin in IT. There are 600 million people like Karin working within traditional enterprises today.

Now, Justin in IT is not the bad guy. He’s spent countless months integrating systems to provide access without data leakage and millions of dollars to manage identity and risk. And now the CIO is talking about BYOD?

If Justin has to lock down personal devices with mobile device management, or MDM, it defeats the whole purpose of BYOD. Karin will hate it, because Justin will know everything she does on her personal device. Except for email, every new secured business app will need special permission. And she doesn’t like the idea that Justin will be able to lock or wipe her device should she lose it or leave the company.

For his part, Justin isn’t happy about the prospect of installing new MDM servers in his datacenter and taking on the added overhead of managing mobile devices.

What the people want

Today's enterprise users want their core business applications to be as seamless as their consumer apps. They don't want to be guests on their company's wireless network; they want to log in once and access all their work applications and secure documents. They don't want to have to switch between browser tabs for different mobile business applications; they want to stay within a single mobile workspace for both applications and documents. They don't want broken user experiences when trying to access behind-the-firewall documents from SaaS and other applications.

The ideal BYOD scenario would also create a clear wall between personal apps and content, and information related to work applications and documents. That way, if Karin’s iPad ever is compromised, Justin can wipe company data and leave her vacation photos intact.

This ideal BYOD scenario would also leverage all that great work Justin and the IT team have done to integrate backend systems into business workflows and then make them accessible through web applications via VPN and authentication systems.

Enterprises don't need yet another parallel system for delivering applications to brought-in mobile devices. In fact, most already have what they need to deliver applications and data effectively from behind the firewall. And here at Workspot, we happen to believe that this isn't a distant future, but something that can happen today.

Getting rid of blind spots

Let's assume that Karin and Justin can agree on one place for work, with the rest of the mobile device remaining personal. One of the lingering issues for deploying BYOD is the lack of context and visibility into activity on the device itself. When Karin says, "My SharePoint is running slow on my iPad," what does Justin do? He goes to his application server and network monitoring systems and verifies that no outages occurred. This still does not address the fact that Karin, and perhaps many of her peers, are experiencing slow apps. Justin simply does not have actual user experience data on what is happening on each individual mobile device, so he cannot make the adjustments that would improve Karin's experience, and everyone else's, and lead to increased productivity.


August 28, 2013  5:32 PM

Does ECM Still Add Value to your ERP Environment?



Posted by: Dr. Werner Hopf
CIO, IT assets, SAP, Uncategorized

SAP has been at the forefront of ERP providers with its vision for supporting the “real-time enterprise.” For the last several years, the company has consistently been introducing technologies that are steering business strategy in this direction. Data, obviously, is at the core of this vision. More specifically, SAP’s vision is predicated upon helping businesses store, organize and leverage all this data in ways that dramatically enhance understanding, engagement and responsiveness to strategic objectives.

However, the ever-accelerating speed of business and the quest for highly granular data analysis, coupled with explosive data growth, have created significant performance bottlenecks in the ways that corporations manage and retain documents and data across their extended enterprises.

These bottlenecks can have crippling effects on organizations using SAP ERP applications, especially if the business is also consolidating data management through a third party Enterprise Content Management (ECM) system.  With data generated at historic rates and real-time query applications demanding instant accessibility, an ECM-centric approach to data management can drag down business performance.

The reason? ECM's raison d'être was to provide secure, long-term storage, image capture and document management. But it wasn't designed for the ever-increasing data volumes being generated today. The infrastructures of such solutions have continued to expand substantially, increasing maintenance overhead (and support costs) with the addition of a wide range of extended components such as web content management, business process management, workflow design tools, social collaboration and digital asset management, to name a few.

In the drive towards "real-time" operational and competitive responsiveness, core ECM functionality, such as managing the content lifecycle from creation to destruction and delivering, preserving and storing data, remains essential. In many instances, however, traditional ECM systems are not able to keep pace when it comes to process changes and can actually slow operations. Plus, there may be latency or synchronization issues between the ECM and ERP systems. In some cases, these systems have become all-encompassing and are sapping both administrative and budgetary resources. ECM solutions were developed as generic retention management systems and don't have any real ability to be fine-tuned to fit customer-specific needs.

It used to be that the more complex a system was the better; today it is simplicity that takes center stage. Companies are now focused on the value of content rather than the technology. And for solutions, they want a short implementation cycle.

Organizations have made a significant investment in ERP offerings, and SAP systems in particular. The ERP has, in essence, become the 'system of record.' But now there are twenty-first-century solutions, already implemented at many organizations, that take a more efficient approach to data management and are designed to fit SAP's vision.

These low cost and flexible systems enable archiving to meet performance and legal requirements and support a data management strategy. Dolphin’s best practices now include a much lighter weight option. It uses SAP solutions as the application layer and hard disk storage systems for long-term data retention to deliver fast, reliable access to stored content.

Because the software is implemented as a "stateless" translation layer between the SAP solution and the long-term storage devices that hold archived data, data and documents do not have to be maintained on the content management server. Instead, the server maintains only configuration data and, optionally, cached copies of stored documents for performance improvement. Persistent information resides on the storage hardware layer and within the SAP solution. With a stateless implementation comes the advantage of using the existing backup and replication functionality of both the SAP system and the storage hardware. Perhaps best of all, this requires no additional configuration.
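Conceptually, the stateless pattern looks something like the Python sketch below. The class and method names are hypothetical (this is not the actual product interface or an SAP API); the point is that the translation layer persists nothing of its own beyond configuration and an optional cache.

```python
# Conceptual sketch of a "stateless" translation layer (hypothetical names,
# not the actual product or an SAP API). Persistent data lives only in the
# SAP system (the document references) and on the storage hardware (the
# content); this layer keeps nothing but configuration and an optional cache.
class StatelessContentBridge:
    def __init__(self, storage_backend, cache=None):
        self.storage = storage_backend   # long-term storage, assumed to expose put/get
        self.cache = cache if cache is not None else {}  # optional, performance only

    def store(self, doc_id, content):
        # Write straight through to storage; SAP keeps only the reference.
        self.storage.put(doc_id, content)
        return doc_id

    def retrieve(self, doc_id):
        # Serve from cache when possible, otherwise fetch from storage.
        if doc_id not in self.cache:
            self.cache[doc_id] = self.storage.get(doc_id)
        return self.cache[doc_id]

# Because nothing persistent lives in this layer, backup and replication are
# handled entirely by the SAP system and the storage layer, as described above.
```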

The result is a content management system that can be implemented faster and administered more simply, with rapid information retrieval for business purposes and upwards of 95% data compression for economical storage. Another bonus is that implementation can be completed in just a few days.

The vision of “real-time” operational responsiveness is closer than ever before. The challenge is how to achieve it in the face of ever-growing data generation and consumption. The best approach is likely not with software-heavy ECM systems that were constructed to support the data management requirements of the 1990s, but rather, with lightweight, focused components that emphasize both speed and a lower, more attractive total cost of ownership.

What’s your take on the future of this approach? Is it time to sunset ECM?


August 12, 2013  7:34 PM

What Breaking the Law and Cellphones have to do with Data Centers



Posted by: Nlyte Software
Data Center, DCIM

By Matt Bushell

We've all heard of Moore's Law: every two years the number of transistors on an integrated circuit (and thus processing power) doubles. Gordon Moore made this observation way back in 1965 and it became popularized in 1970, so it shouldn't be news to any of us. In fact, it is a reality in our daily lives. Think of a smartphone and your cellular phone contract: every two years you are eligible to get a new one from your carrier, and your carrier is willing to underwrite the bulk of the cost of it. Why? Because the company can make enough money on your contract to cover the hardware cost of the phone. Think about this for a moment: the services are more valuable than the hardware. Now let's put that in our back pocket (metaphorically; I wouldn't recommend putting a phone in a back pocket lest its screen crack). The other reason is that the cellular carrier knows a new model will come out in two years, which gives it an opportunity to entice you to stay. A lot of mobile phone innovation has to do with processing power (you could probably land 1,000 lunar missions with your iPhone, but don't quote me on that).

So to recap: technology is advancing, processing power doubles every two years, and service is more valuable than hardware. Now let's talk about servers and the data center. Your data center has a LOT of servers. Thousands of them. You may consider it a badge of honor keeping them all running (you should), but are you getting the message from your "cellphone provider" (in your case, the IT Finance department) that every three years they're ready to replace that server? Think about that: EVERY THREE YEARS YOU NEED TO REPLACE A SERVER. A thousand servers, 333 replaced each year. A third of your data center turned over each and every year. "Really?" you may ask. It's simple: think of a server's cost, the power it consumes and the workload it executes. The hardware (and software) costs typically account for only 15 to 25 percent of the overall costs associated with installing, maintaining, upgrading and supporting a dedicated server [1]. So, if in three years a newer cousin can do the same work on HALF the power (Moore's Law), and the hardware is only a fraction of the overall lifetime cost, why wouldn't you change out that old server ("but it's only three years old") for a new one?
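Here is the back-of-the-envelope version of that argument in Python, using assumed numbers (your servers, power rates and contracts will differ): if a new box does the same work at roughly half the power and sheds the extended-warranty costs of aging gear, the hardware pays for itself inside the three-year window.

```python
# Hypothetical three-year refresh math. Assumes hardware is only a small slice
# of a server's total cost of ownership and a new-generation server delivers
# the same work at roughly half the power (per the Moore's Law argument above).
hardware_cost = 4_500            # one new server (assumed)
annual_power_old = 1_800         # $/year to power and cool the old box (assumed)
annual_power_new = annual_power_old / 2
annual_maintenance_old = 1_000   # extended warranty/service on aging gear (assumed)
annual_maintenance_new = 250     # largely covered by the new warranty (assumed)

annual_savings = (annual_power_old - annual_power_new) + \
                 (annual_maintenance_old - annual_maintenance_new)
payback_years = hardware_cost / annual_savings

print(f"Annual savings per replaced server: ${annual_savings:,.0f}")
print(f"Payback on the new hardware:        {payback_years:.1f} years")
```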

Your Finance department has other laws it needs to abide by, specifically Generally Accepted Accounting Principles (GAAP). GAAP states that capital equipment like servers cannot be fully depreciated in less than three years. There are financial impacts of leaving a server in place past its 3-year schedule: the lost tax benefits, the lease overruns (plus lease cost acceleration beyond the initial term), and the cost escalation of service and/or warranty extensions. What about the cost of abandoned servers that take up space and idle power, artificially consuming precious capacity that could be freed up, or worse, leading you to think you need to expand your data center? Or the cost of running dual sets of gear when the replacement process is slowed by inefficiencies: you pay for lease, warranty, depreciation, service and power on both sets, all of it overlapping.

So EVEN IF Finance drove the rest of the organization to tightly manage asset replacements every three years, they'd still be violating ole' Gordon's Law by an entire year. But at least they'd be doing their best not to be a law breaker. The question is, are you?

Matt Bushell leads Nlyte’s Product and Corporate Marketing efforts as their Sr. Product and Corporate Marketing Manager. Prior to joining Nlyte, Matt worked at IBM for more than ten years, helping to launch multiple products in their Information Management and SMB groups.

[1] http://www.webopedia.com/DidYouKnow/Hardware_Software/Server/cost_of_server_administration_and_maintenance.html


August 7, 2013  3:57 PM

Connecting the Dots in the Data Center – DCIM and ITSM



Posted by: Nlyte Software
Data Center, DCIM, IT assets

by Mark Harris

As a general rule of thumb, the more connected your data center infrastructure management (DCIM) solution is to your existing IT management frameworks, the more strategic your DCIM deployment will be. The more connected the solution is, the larger the population of users will be. The larger the population is, the more financial impact DCIM will have in your organization.

DCIM is a fairly new category of IT management solution, and its definition varies depending on whom you are talking to. As a point of reference, DCIM is defined by the industry analyst firm Gartner as the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center's critical systems. Achieved through the implementation of specialized software, hardware and sensors, DCIM enables a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.

The analysts' definitions vary a bit and in general are very broad, which has created an environment where vendors pop up daily with their own interpretations. In general, DCIM tools provide real-time information about the physical assets found in a data center, the resources they consume, their lifecycles and the mapping between the virtual, logical and physical layers within the IT structure.

In a market such as this, many potential DCIM customers start their research and investigation of available DCIM solutions in a very hands-on tactical mode. DCIM comes in all shapes and sizes, and in fact includes everything from sensors and power monitoring, to full-fledged life cycle management suites. Each potential customer looks at the pieces that appear to have the most relevance to their management goals today. They start their DCIM journey looking for solutions that fit into their existing ways of doing business, new tools to be applied to their old approaches.

Luckily, a few are beginning to see the much bigger opportunity, and their tactical thinking quickly transitions toward attaining much more strategic value across wider audiences, which is ultimately realized through deeper integration with their existing processes and their rapidly evolving service management structures. The very role of the CIO has changed from "all services at ANY cost" to a more value-oriented approach: "the right level of services at their value to the company."

DCIM is Strategic when connected to ITSM

DCIM delivers core value when deployed in a manner that does not make it yet another island of features. Many of the DCIM industry's early adopters started their journey with DCIM tools that were nothing more than enhanced drawing solutions, where visual fidelity reigned supreme. In fact, many of the currently available DCIM solutions, and even the latest open-sourced OpenDCIM projects, are basically enhanced spreadsheets with drawing built in, which works great for documenting devices but still misses the BIG opportunity.

Now we are starting to see a fundamental shift in discipline and accountability. Everyone wants to look forward rather than backward, and relatively few folks are trying to protect their previous ways. The ITIL-like approaches (anything that enables discipline and accountability) are becoming much more interesting in this new climate. That's where DCIM thrives, and it is why new DCIM solutions must complement and integrate with existing management apps.

IT Service Management, or ITSM, is the set of process-based practices intended to align the delivery of information technology services with the business needs of the enterprise.

DCIM (when done right) forms the critical physical extension to ITSM. What do I mean by "done right"? It's when DCIM is deeply integrated with ITSM and becomes part of the critical path for change management. How will you know you've been successful? The number of users will grow into the dozens or hundreds, remedial accuracy will increase, time frames for labor-intensive operations will shrink, your costs for assets will be right-sized, and your capacity planning will begin to include the physical layer itself, including all of the energy-related costs. DCIM is not about drawing pretty pictures; it's about extending the service management discipline to include the physical layer, which has been largely ignored for the past 40 years. DCIM directly supports the transformation that is already occurring across your IT structure.
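What might "part of the critical path for change management" look like in practice? Here is a deliberately simplified, hypothetical Python sketch (not any vendor's API): before an ITSM change request to rack a new server is approved, it is validated against the physical capacity a DCIM tool tracks.

```python
from dataclasses import dataclass

@dataclass
class RackCapacity:
    """What a DCIM tool would track per rack (simplified, hypothetical)."""
    free_u: int               # free rack units
    spare_power_kw: float     # headroom in the rack's power budget
    spare_network_ports: int  # unused switch ports

def validate_change(request: dict, rack: RackCapacity):
    """Return (approved, reasons) for a 'rack new server' ITSM change request."""
    reasons = []
    if request["units"] > rack.free_u:
        reasons.append("not enough rack space")
    if request["power_kw"] > rack.spare_power_kw:
        reasons.append("power budget exceeded")
    if request["ports"] > rack.spare_network_ports:
        reasons.append("no free network ports")
    return (not reasons, reasons)

# Example: the server fits physically but would blow the power budget.
ok, why = validate_change(
    {"units": 2, "power_kw": 1.4, "ports": 4},
    RackCapacity(free_u=3, spare_power_kw=0.9, spare_network_ports=8),
)
print(ok, why)   # False ['power budget exceeded']
```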

Mark Harris is the vice president of marketing and data center strategy at Nlyte Software, with more than 30 years of experience in product and channel marketing, sales, and corporate strategy. He also shares musings on the industry at DCIMExpert. Nlyte Software is the independent provider of data center infrastructure management (DCIM) solutions.


July 23, 2013  2:27 PM

4 Surprising Reasons You Hate Your Smartphone



Posted by: Renodis
Mobile


Wireless networks are better and bigger than they used to be, but consumer complaints about coverage have not tapered off. Mobile device technology is advancing at a breakneck pace, yet we complain about our smartphones. Why? The paradoxical nature of wireless consumer sentiment is rooted in the "inconvenience associated with convenience." We are forever chasing the moment when we have the latest innovation. The barriers to achieving this temporary nirvana, combined with its fleeting nature, are the greatest sources of disdain for something that should consistently amaze us. Here are 4 perhaps surprising reasons you hate your smartphone.

Reason #1 You Hate Your Smartphone: It’s Too Smart, But Not the Smartest Available

The evolution of cell phones during the last decade has been nothing short of amazing. Advances in technology, including the rise of the smartphone and mobile applications, have occurred at an unprecedented pace for an industry. Many people can't even keep up with the capabilities of their own smartphone. More and more, I see that it doesn't take long for our expectations to exceed what we're able to get out of these devices. We are accustomed to, and have come to expect, this rapid evolution of technology. The latest and greatest smartphone is only the latest and greatest for a month or two before there's something out there that makes it obsolete. Once we have our device, we get two months of being on top of the mobile food chain before being knocked off.

My suggestion in this area is to wait to upgrade until you understand the landscape and technologies that will soon be on the market. Check out www.ctia.org for upcoming smartphone technology that will allow you to stay ahead of the game. And don't be afraid to talk to every smartphone geek you can find (computer stores or big box electronics stores are excellent sources of geek knowledge).

Reason #2 You Hate Your Smartphone: I Can’t Get the Latest and Greatest

We can't upgrade again for two years without paying double or triple the cost of the device compared to an eligible upgrade. The reason for this is that the phone cost is heavily subsidized by the wireless carrier. That contract is a way to protect the carrier's investment. This fact is the biggest frustration for mobile consumers. Locking people into a contract for two years on a device that will be obsolete, or at least outdated, in a few months hardly seems fair. So far no one has come up with a good solution. T-Mobile recently announced the end of contracts on smartphones. You now have the option to not have a contract on a high-end smartphone on their network. In reality this means you're still paying the unsubsidized cost of the device, but spreading it out over monthly installments. To me, this seems like buying a phone from Rent-A-Center. You're getting nailed either way for the full amount or worse, but it is spread out and doesn't hit you all at once.

My suggestion for the consumer who wants to future-proof their phone is to check the value of the phone through the major recycling companies.  The current buy back on a used iPhone 5 on any carrier is about $340 in excellent condition through www.E-cycle.com or www.Gazelle.com.  You could conceivably upgrade to an iPhone 5 for $199.99 on a two year contract, keep it for a few months (in good condition) and upgrade to another smartphone at full retail (around $650).  You would have the new phone for 3 months, then upgrade to another device for $310 when factoring in the credit on the iPhone from the recycler.   It’s one way to take the sting out of upgrading at full retail when you have not reached your upgrade subsidy date.
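For clarity, here is the arithmetic behind that suggestion in a few lines of Python, using the prices quoted above (resale values change constantly, so treat them as a snapshot).

```python
# The arithmetic behind the suggestion above, using the prices quoted in the post.
subsidized_iphone5 = 199.99  # iPhone 5 on a two-year contract
full_retail_new = 650.00     # next device at full retail
resale_iphone5 = 340.00      # approximate buy-back in excellent condition

net_cost_second_upgrade = full_retail_new - resale_iphone5
total_out_of_pocket = subsidized_iphone5 + net_cost_second_upgrade

print(f"Net cost of the second upgrade: ${net_cost_second_upgrade:.2f}")  # $310.00
print(f"Total spent across both phones: ${total_out_of_pocket:.2f}")      # $509.99
```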

Reason #3 You Hate Your Smartphone: My Carrier is the Worst

Several factors may contribute to someone's allegiance to one carrier over another. When the market became saturated, the carriers ran out of new customers. It became harder and harder for carriers to win new customers without stealing them from each other. To steal customers, the carriers had to focus harder on the strengths of their networks and the weaknesses of their competitors. First this approach happened with voice, and today it's happening with high-speed data. We are inundated with commercials touting who has the highest population area covered or who has the largest, most reliable data footprint. With all of this information floating around in our heads, we add our own experiences to the mix and usually pick a side. We decide to favor one carrier over another and hold contempt for at least one competing carrier. You will also see this kind of brand loyalty in other industries (like the auto industry), but in wireless it is approaching the divisiveness of politics in an election year. Though largely uninformed, and without any scientific method, we arrive at the conclusion that carrier X is terrible and carrier Y is the best. If you happen to have service with the former, you have decided your carrier is the worst.

My suggestion is to combine personal experience with third party knowledge like www.ConsumerReports.org, www.RootMetrics.com, or www.PCWorld.com to help gain a better understanding of your preferred carrier.

Reason #4 You Hate Your Smartphone: It’s Too Expensive to Repair/Insure

Insurance options can be very expensive.  They are either worth it or a complete waste depending on your behavior, device replacement or repair options, and associated costs.

My suggestion in the Enterprise Account world is to negotiate upgrade eligibility waivers and early termination fee waivers during a contract renewal. These are valuable tools if you need to replace a damaged device in a pinch. If you are an individually liable subscriber, I suggest a rugged case and a third-party repair option like www.Gophermods.com. Insurance is still worth it to some individuals whose potential loss in business outweighs the annual cost of insurance.

About the Author
Brian Dykhuizen is the Mobility Manager at Renodis and has over 12 years of experience advising clients in all areas of mobile support.


July 1, 2013  11:45 AM

4 Examples of IT Leaders Being Relevant to the Business



Posted by: Renodis
CIO


In a recent blog post I talked about the 6 Ways IT Leaders Can Be Relevant to the Business.

  1. Sit at the Executive table
  2. Partner with leaders of the business
  3. Act like a problem solver and solution provider, not an order taker
  4. Outsource where necessary, delegate when necessary
  5. Run IT like a business
  6. Hire the right talent and hold them accountable

But now let’s take a dive into a few examples of just how IT leaders are driving results and becoming relevant to the business.

Example #1

Retail company ABC wanted to improve the customer experience, improve customer data gathering, and increase supply chain efficiencies. The IT organization proposed using multichannel commerce (eCommerce, POS, Supply Chain, CRM) to help increase sales, make it easy for their customers to order products online, and reduce customer wait times when visiting their retail locations.

Example #2

Financial company ABC needed to reduce network downtime for voice and internet communication, and also needed to improve customer data security. The IT organization proposed upgrading network bandwidth to improve performance, increasing security, implementing a disaster recovery plan to protect data, and deploying web based applications for use at branch locations to support customer data.

Example #3

Company ABC in the manufacturing industry needed to improve internal collaboration and communication, increase employee productivity, and improve internal employee knowledge of products and services. The IT organization proposed and implemented collaboration tools and web based platforms to improve customer relationship management and internal communication/productivity between employees, customers, and partners.

Example #4

Company ABC in the distribution industry wanted to increase field sales, improve the customer experience, and improve productivity. The IT organization proposed using mobility and tablets in the field to deliver web based training and web based platforms for order entry, resulting in improved employee performance, increased efficiencies for order entry, and an improved customer experience.

Bringing It All Together

Supply chain, inventory processing, collaboration, web based training, GPS, mobility, improving bandwidth to improve efficiencies and network performance… Whatever your business does, systems and technology can help drive results for the business and improve the experience for customers and internal end users. But all of these things need to be tied to the overall future strategy of the business, not just a way to use technology for the sake of using technology.

It is the job of IT to work with executive teams and stakeholders to understand their strategies, goals, desired outcomes and key challenges, and how IT can help them achieve these results.

Ryan Carter is an experienced industry expert in Telecom Management and Enterprise Account Manager at Renodis.

