I can build multiple use cases to show the importance of the Modern Requirements management tool, built on and for Microsoft Team Foundation Server. But before talking about Modern Requirements4TFS, let us understand what the biggest threat to the success of a project is: the business or user requirements. A minor ambiguity in requirements finalized at the initial stage of a project can create a major threat at a later stage. Research from PMI (Project Management Institute) says only three things can kill a project: people, process, and communication. Seven factors lie within these three segments, and a shortfall in any of them can cause significant delay or failure. As PMI puts it, “Provide the project team members the tools and techniques they need to produce consistently successful projects.”
That shows the importance of a good tool like Modern Requirements4TFS in the success of a project: a tool that covers people, process, and communication to maintain a strong hold on the project throughout its lifecycle. “Most project problems are caused by poor planning,” says CompTIA, and in most of those cases it is the requirements that play havoc. Orthodox requirements management tools have failed drastically to capture today’s dynamic requirements crisply. Requirements captured only in text leave plenty of room for loopholes and ambiguities. That is why you need a tool that captures requirements with visual contexts and use cases. This not only helps capture requirements clearly and unambiguously but also sets a faster pace for the project’s development. Project Insight has likewise identified poor planning among the top causes of project failure.
Team Foundation Server from Microsoft is popularly known as TFS. To use it to its full potential and draw the maximum benefit for requirements management, you need an intelligent tool, and Modern Requirements4TFS is one of those. Uniquely integrated into Microsoft Team Foundation Server and VSTS, it gives you access to a series of new hubs and features. It is a web-based tool with multiple modules that give you full control of requirements management: you can define and manage requirements quite easily and run the show efficiently. Its integration with Team Foundation Server and Visual Studio Team Services gives you a great level of control on the Microsoft platform. The setup can be arranged to suit your needs: it can reside on-premises if you have limitations on the cloud, or else in the cloud.
Most organizations building for Microsoft platforms use Modern Requirements4TFS for requirements management. In fact, this tool, built on and for Microsoft Team Foundation Server, is the most appropriate choice in such an environment because all requirements are stored natively as work items in TFS (Team Foundation Server) / VSTS (Visual Studio Team Services). Many organizations also use TFS for tracking development work items, testing, and release management. In a nutshell, Modern Requirements users get a broad set of features and functionalities.
The success of any project rests on three key factors: open communication, effective collaboration, and the use of intelligent systems. A tool like Modern Requirements4TFS enables all three, helping any organization achieve effective results in project management and team collaboration. Sharing your plans visually acts as a catalyst for presentation and understanding, and thus reduces risks drastically. On top of that, the tool improves scope management, change management, and project quality.
DH2i, a leading provider of multi-platform high availability/disaster recovery software, has just released Version 17.5 of its DxEnterprise software. To gain a better understanding of this software and what it offers enterprise IT users, I recently sat down with Don Boxley, co-founder and CEO of DH2i (www.dh2i.com).
What key issues are you addressing with the v17.5 release of DxEnterprise?
Boxley: At the core, the issue comes down to how enterprises can achieve digital transformation: IT teams are under pressure to do more with limited resources. Businesses and other enterprises are increasingly dependent on data, so they require high availability – as little as 10 minutes of downtime can be disastrous. Finally, with sprawling infrastructures, IT resources are strained.
To overcome these issues, enterprises need to think strategically about integrating legacy infrastructure. For the majority of today’s enterprises, legacy infrastructure is the primary obstacle: it is expensive to maintain, both financially and in terms of required labor.
How might v17.5 help with the legacy infrastructure constraint enterprises face?
Boxley: DxEnterprise, and more specifically version 17.5, alleviates the legacy infrastructure issue with an application-based approach. It supports industry-first unified Windows/Linux automatic failover and fault detection. The company initially had a Windows focus; this new release builds on earlier DxE versions, adding management and servicing of Linux, with automatic Windows/Linux failover for SQL Server 2017, all through a single Windows/Linux management console. SQL Server 2017+ users benefit from this multi-platform environment because it allows them to move workloads and data to and from any cloud, and to scale cloud-based data analytics and business intelligence.
Another key component of DxE v17.5 is that it enables users to create a new class of distributed frameworks which allow workloads to move to the best execution venue, based on computational and budgetary considerations – we call this Smart Availability. This often means fewer operating system environments are required and less time spent on system maintenance. Ultimately, it frees IT professionals to spend their time on higher-yield activities that impact the bottom line.
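The "best execution venue" idea can be illustrated with a small sketch: given candidate venues with a compute score and an hourly cost, pick the venue that meets a minimum compute requirement at the lowest cost. All names, fields, and numbers here are hypothetical illustrations, not DxEnterprise's actual API or policy.

```python
# Toy illustration of "best execution venue" selection: choose the cheapest
# venue that still satisfies a computational requirement. Purely illustrative.

def best_execution_venue(venues, min_compute):
    """Return the cheapest venue whose compute score meets the requirement."""
    eligible = [v for v in venues if v["compute"] >= min_compute]
    if not eligible:
        raise ValueError("no venue satisfies the compute requirement")
    return min(eligible, key=lambda v: v["cost_per_hour"])

venues = [
    {"name": "on-prem", "compute": 8, "cost_per_hour": 4.0},
    {"name": "cloud-a", "compute": 9, "cost_per_hour": 3.2},
    {"name": "cloud-b", "compute": 5, "cost_per_hour": 1.1},
]

print(best_execution_venue(venues, min_compute=7)["name"])  # cloud-a
```

A real Smart Availability system would of course weigh many more signals (health, QoS, licensing), but the core trade-off between computational fit and budget is the same.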
You talk about Smart Availability, as opposed to high availability. Can you describe the difference and what it means for the IT user?
Boxley: High availability refers to overall uptime, while Smart Availability is an evolved, strategic approach in the direction of that same general goal. Smart Availability decouples databases, Availability Groups, and containers from the underlying infrastructure, and hence allows workloads to move to their best execution venue. High availability alone is often counterproductive: it simply adds to the infrastructural complexity without regard to the overall objectives. Smart Availability instead adapts to the overall business objectives and constraints.
Are there any other applications you see being created by this new release?
Boxley: The single cross-platform service, with its built-in HA capabilities, will be useful to managed service and public cloud providers. They’ll be able to increase recurring income by offering this service to customer applications – previously we saw many of these providers leaving it to the customer to ensure high availability. With this release, providers can include high availability as a service.
We’ve also included enhanced features such as InstanceMobility for dynamic Smart Availability workload movement, and intelligent health and QoS performance monitoring. These help ensure DxE v17.5 cuts costs, simplifies IT administration, and frees the IT team to do the most impactful work in the enterprise.
Performance monitoring has become a critical factor for all business applications running in an enterprise, for several reasons. First, no application functions in isolation; there is always a dependency, backward or forward, with the application either pushing data to another application or pulling data from one. Infrastructure, likewise, is not beyond the scope of performance monitoring. Everything has to be in sync, because it is the overall performance that matters to the organization. So even if your infrastructure is modern, state-of-the-art, and performing at rocket speed, it loses its value if the applications residing on it are under-performing. Let us look at the rising trends in performance monitoring for both applications and infrastructure. The key contributors are Big Data, machine learning, IIoT, and the like, and the SaaS delivery model plays a significant role.
Overall, industrial architecture is not as simple as it was a few years back. This is the age of complex applications: features like containerization, microservices, and heterogeneous clouds to tackle data overloads are becoming critical. Data, in fact, is flooding in from all directions, and it is very important to analyze it. There are several approaches to a proper performance monitoring mechanism, and it is necessary to learn about each: code-level APM (Application Performance Monitoring), Network Performance Monitoring (NPM), performance testing, Real User Monitoring (RUM), and synthetic monitoring. Code-level APM is a good tool for reporting load time and response time; in fact, it smartly figures out the lines of code in the application that are causing these troubles. New technologies obviously require new approaches: technologies like containerization and microservices need tracking of a tremendous amount of data to ascertain performance.
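A minimal sketch of the kind of per-function timing a code-level APM agent performs: wrap a function, record its response time, and flag slow calls. The threshold and names are illustrative assumptions, not any specific APM product's API.

```python
# Sketch of code-level response-time monitoring via a decorator.
# SLOW_THRESHOLD_MS and the reporting format are hypothetical choices.
import functools
import time

SLOW_THRESHOLD_MS = 200.0

def monitor(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            if elapsed_ms > SLOW_THRESHOLD_MS:
                # A real agent would attribute this to specific lines of code;
                # here we simply report the function name and duration.
                print(f"SLOW: {func.__name__} took {elapsed_ms:.1f} ms")
    return wrapper

@monitor
def handle_request():
    time.sleep(0.25)  # simulate a slow code path
    return "ok"

handle_request()  # reports the slow call with its response time
```

Commercial APM tools go much further, instrumenting bytecode automatically and correlating traces across services, but the underlying measurement is this same wrap-and-time pattern.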
Performance Monitoring Needs Better Tools
Given the complexities of performance monitoring, vendors offering APM and similar services include machine learning methodologies to attain optimum results in data mining and to generate important information. After all, it is performance that matters most in an organization, and it becomes the responsibility of the IT department to ensure that no employee's performance suffers from a poorly performing application or piece of infrastructure. Usually it is somebody at the top of the technology department who owns this responsibility; this is the person answerable for any kind of performance issue.
The increasing applicability of Artificial Intelligence (AI) in real life is responsible for the development of chatbots. In fact, the technology has reached a maturity level where it can engage a prospective customer quite significantly. A steep growth estimation for the chatbot industry runs from 2015 to 2024: a report from Transparency Market Research values the market at US$100 million, and it is about to touch US$1,000 million in 2024. That is a phenomenal jump in all respects. On a similar note, a PwC research paper states AI will be contributing US$20 trillion to the world economy by the year 2030. That signifies the tremendous potential of this market in the coming years. The drastic drop in data rates is one of the key contributors to this phenomenal growth. Alibaba Group is investing a large pool of its profits in R&D, especially AI.
Note that Alibaba's sales are currently growing at a rate of 20%, amounting to US$100 million; these figures are from a report by Market Realist. Such stupendous growth in the chatbot industry is changing the whole paradigm of the business model, opening new avenues in infrastructure, cost-saving alternatives, and business transactions. Artificial Intelligence (AI) is contributing significantly to business development in almost all industry segments, with quite positive results in terms of business growth. It is turning out to be a highly beneficial proposition, and chatbots are becoming a mainstream companion of day-to-day operations. It is therefore important to understand how you can grow your market through chatbots; if you can do that, you can easily minimize operating cost and maximize productivity.
Chatbots Industry Sees Tremendous Growth in AI
As a matter of fact, chatbots are a clear example of a new form of collaboration between people and machines. If you can harness the power of artificial intelligence in business automation, especially with Big Data, it can increase your operational efficiency manifold.
What is a digital transformation? Different enterprises define it in their own way, and the further you are from the right definition, the greater the chances of going wrong. Basically, it is how you integrate digital technology into all areas of your business. The transformation will drastically change how you operate and how you deliver value to customers. When it happens at your end, your customers will also experience a huge transformation in the way they do business with you. It demands a big cultural change, not only within the organization but around it, affecting all stakeholders in one way or another.
Digital transformation involves digital technology extensively; a large amount of mobility will enter your day-to-day functioning. It also demands continuous change, involving a lot of experimentation, failures, and successes. The best way to get the most out of it is to keep challenging your status quo: the more you challenge it, the more ideas you get to improve. Digital transformation is important for businesses of all kinds and sizes; whether you are a small business or a large enterprise, the importance remains the same. Adopting it is essential to stay competitive in the market; if you don't take appropriate steps in this direction, your competitors will leave you behind in no time. It also keeps your relevance intact. But it is not merely moving to the cloud, as a lot of business leaders think.
Digital Technology Is A Lot More Than Moving To Cloud
It is important for an enterprise or a small business to understand digital technology and digital transformation correctly. What specific steps do they need to take? What changes in job profiles might happen, and what new jobs come into existence? What could be the right framework to start with? Do you need a consultant to start? What changes in business strategy will happen? And most importantly, what is it really worth, and what are you gaining from it? All these things are very important to understand.
“Digital technologies continue to transform the work, how we interact with colleagues, and the value we deliver to clients and customers,” says Asoke Laha, CEO, InterraIT at the event ‘The Future of Digital Transformation’ at their Noida office. “This means all decision making is data-driven, and leadership must focus on providing insights into marketing and customer engagements,” he concludes.
There are a few things to notice about data breaches. Enterprises are preferring the cloud over on-premises for less critical applications, which means information security trends are changing noticeably. But more important is to understand whether the cloud is driving a shift in security spending, and whether that shift is upward or downward. Studies reveal security budgets are rising consistently across the globe. A portion of the credit should go to the grand publicity given to security breaches, especially breaches like Spectre and Meltdown. These are unanticipated risks that can take a bigger bite than the IT budget you keep for security. In a nutshell, security has become one of the top two budget components, the first being the cloud, and it might take the top slot in time to come. Despite all constraints, security budgets are moving up, and that trend is visible in companies of all sizes.
Even though cloud service providers maintain their own security controls, internally or through third parties, security remains a topmost concern of businesses; indeed, few enterprises depend solely on their cloud service providers to raise an alarm on data breaches. According to one report, most organizations allocated almost 20% of their IT budget to information security in 2018 on average. Around 5% of organizations say their spending on information security will be lower in 2018 than in earlier years, but that is negligible: more than 95% of businesses are spending more on information security than the previous year. Organizations are spending more on application security than on hardware and network security. Security spending trends are changing drastically, which shows the substantial impact of the cloud on this spending. Testing and performance are becoming two major thrust areas in cloud environments for the identification of data breaches.
Data Breaches Are One Of The Biggest Threats
Such proactive approaches to testing will decrease enterprises' reliance on cloud vendors, because they would get automatic alerts before their cloud vendors notify them of data breaches.
Io-Tahoe has just announced the General Availability (GA) release of its smart data discovery platform. I sat down with Oksana Sokolovsky, CEO of Io-Tahoe, to better understand the data challenges facing modern enterprises, and how Io-Tahoe is attempting to address them.
You’ve just announced the GA launch of the Io-Tahoe platform. What challenges are you hoping to address with it?
Sokolovsky: I founded Rokitt Astra (Io-Tahoe) in 2014, together with Rohit Mahajan, our CTO and CPO, with the goal of providing the go-to platform for data discovery. The modern digital enterprise faces a complex set of challenges in maximizing the business value of data. For one, enterprises struggle with how to integrate a growing number of disparate platforms, with a formidable volume of data stored across databases, data lakes, and other silos. This makes it difficult or impossible for organizations to comprehensively govern, and ultimately utilize enterprise data.
How does Io-Tahoe address these challenges?
Sokolovsky: We built Io-Tahoe with the goal of providing a fundamental building block for all data discovery. This vision entails making data available to everyone inside the organization and automatically weaving through the data relationship maze to provide actionable insights to the end user.
The platform is built on a machine learning base. It uses machine learning to identify data relationships, including within both metadata and the data itself. It operates in a “platform agnostic” manner and allows organizations to uncover data resources across diverse technologies.
The platform enables a variety of disciplines in the data field – from analytics to governance, management, and beyond. It also commits to a leverageable view of data – data insights should be available to everyone in an organization. This is made possible through an easy-to-use interface, built on a scalable architecture.
We’ve also included a new Data Catalog that allows organizations to compile or enhance data information so it can be leveraged across the organization.
What do you see the future holding in store for data discovery?
Sokolovsky: In two words: dramatic growth. A recent report from MarketsandMarkets (1), for instance, predicts data discovery market expansion from $4.66 billion, its 2016 estimated size, to $10.66 billion by 2021. This represents a year-on-year compound growth rate of nearly 20 percent. Most of this growth will be in Europe and North America, with retail services, financial services, and utilities as three of the largest opportunities.
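As a worked check of the MarketsandMarkets figures quoted above, the compound annual growth rate implied by growth from $4.66 billion (2016) to $10.66 billion (2021) over five years can be computed directly:

```python
# CAGR implied by the quoted market sizes: (end/start)^(1/years) - 1.
start, end, years = 4.66, 10.66, 5
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR ≈ {cagr:.1%}")  # ≈ 18.0%, i.e. "nearly 20 percent"
```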
The primary foundation of this demand is the increasing need for data-driven decision processes, but other factors are also driving this explosion. A few we've identified include regulatory pressures such as GDPR; the rise of intelligent technology, which applies predictive analytics in a smart computing context; the shortage of qualified data scientists; the explosion of available data and the increased demand for understanding it; the monetization of data assets; and the unification of data platforms and management.
It sounds like it’s perfect timing for your release of the Io-Tahoe platform. Can you explain why this launch is so exciting for the end users?
Sokolovsky: I’ll be glad to. The GA launch of our data discovery platform is opening our unique algorithmic product to all enterprises. The machine learning aspect will allow them to auto-discover patterns and relationships in their data, and the Data Catalog promises to guide data owners and stewards through business rules and data policy governance.
For example, it can automatically uncover data across the entire enterprise in a matter of minutes, rather than weeks. This reduces labor costs and allows organizations to tap into potentially valuable data.
It also offers self-service features, empowering the end users to engage and share data knowledge. The Data Catalog feature, in particular, enables users to govern data across heterogeneous enterprise technologies, comply with regulations such as GDPR, and automate the previously manual process of data discovery. This will increase efficiency and use of enterprise resources.
How about a use case – can you give us a clearer picture of what Io-Tahoe looks like in practice?
Sokolovsky: Sure – we’ve actually developed three representative use cases to illustrate how customers could use Io-Tahoe. First, the systems use case: the platform can help organizations understand data lake and database migration, and it can also help with system migration and modernization, as well as M&A system integration/divestiture. Second, the data analytics use case: this comprises analytics improvement, increased revenue potential, and improvement of complementary products. Third, the regulatory use case: the Io-Tahoe platform can assist with data governance, as well as regulatory compliance.
Has Io-Tahoe already seen an application?
Sokolovsky: It has. We have multiple successful examples to share. In one, a customer used Io-Tahoe’s platform for data discovery and impact analysis as part of its re-platforming efforts. The customer’s analysis time was cut to a third, and cost decreased by 80 percent, with dependencies well-managed and accounted for.
A major investment bank used Io-Tahoe for data asset discovery and appointed a new Chief Data Officer (CDO) to manage data assets. The organization reported similarly positive results, with the data discovery process becoming automated, reliable, and less labor intensive. This freed staff, including the CDO, to focus on analytics.
It sounds like it’s the perfect timing for Io-Tahoe. Do you have any last words or thoughts to share?
Sokolovsky: I want to emphasize, we’re excited about the opportunity to use our technology to address growing, real-world challenges with data discovery. Few of our competitors are addressing these issues. Enterprises require effective and comprehensive access to their data, regardless of where it’s stored. They require data governance, and compliance with regulations, along with a deeper view and understanding of data and data relationships. Hence, we believe Io-Tahoe may soon be a priority purchase for every CDO.
(1) Data Discovery Market by Type (Software and Service), Service (Professional and Managed), Application (Risk Management, Sales & Marketing Optimization, and Cost Optimization), Deployment, Organization Size, Vertical, and Region – Global Forecast to 2021. marketsandmarkets.com. January 2017.
It was quite an insightful day, bringing industry leaders, academics, and students together. The event was held in-house at InterraIT, but the point of discussion was no less than a topic of global relevance: doing business in India. We had Dr. AD Amar, Professor at Seton Hall University in the USA, who came along with a team of young, energetic, business-minded students from his university. Industry expert and veteran Asoke K Laha, President and CEO of Interra Information Technologies, was the key person guiding the students on business practices in India and the US. He has complete knowledge of both worlds, with offices and operations in both countries, which makes him a perfect choice to guide these budding entrepreneurs, who hail from various countries and study at Seton Hall University.
It was not only the students from Seton Hall University who were eager to learn about doing business in India; we also had a group of students from IIF (Indian Institute of Finance) along with their professor. The professor is an active member of ASSOCHAM (The Associated Chambers of Commerce and Industry of India) and thus carried a wealth of information about the practicalities of opening and closing a firm in India. Opening a company is now quite an easy process in the country: it takes hardly a few minutes, and the whole process is online. The only hiccup that may haunt an entrepreneur is the process of closing a company in India, which still takes a number of years and involves the High Court of India for the closure formalities. That is still a grey area.
Doing Business In India Has Become Easier
The overall motive of any country should be to make it easy for young entrepreneurs from other countries to do business there. As Mr. Laha says, learning about the culture and people is the most important thing for this.
Digital healthcare is no longer a dream. It is happening and evolving across the globe; though the speed of evolution varies from country to country, every country acknowledges its potential, hence its speedy adoption. It can help create an ecosystem that is not only lean but also effective. Given the manner in which industrial technology is advancing, with the internet as its backbone, pharmaceutical and medical technology can achieve astonishing results. It is leading to a system with many benefits: a less labor-intensive architecture, for one, and a mechanism that promises to be cost-effective. It will be an overall lean operating model for health institutions across the globe. The healthcare market is about to touch US$130 across the globe. By adopting digital technology, it is bound to create a new paradigm.
If you belong to this field, you should attend the Digital Healthcare Conference happening in May 2018 in Bangkok. With the help of the digital movement, the health and pharmaceutical sector can definitely do wonders, through cloud technologies, IoT (Internet of Things), AI (Artificial Intelligence), Big Data, VR (Virtual Reality), mobility, and automation. With the right adoption of technology, the sector can optimize its precision, efficiency, and speed to an unbeatable level. Recently there was a survey by Microsoft in this regard, the Microsoft Asia Digital Transformation Survey, which clearly states the importance of medical technology in everybody's life. It finds that more than 75% of leaders in the healthcare segment in the Asia Pacific understand the importance and gravity of transforming into a digital business, which is going to play a key role in future growth.
Digital Healthcare Is The Solution To Many Issues
A modern-day wellness provider can't think of surviving without adopting digital healthcare. There has to be a complete healthcare mechanism that promises to deliver seamless health services, and that can happen only with proper synchronization of physical, biological, and digital systems. This is the only way to tackle critical health issues across the globe. Obviously, this needs a proper training process that educates people on the changing trends in medical technology; it is important for end consumers to leverage these latest trends seamlessly, otherwise the whole effort will go to waste.
Enterprise cloud design is undergoing a transformation, and so is the cloud service provider market. Enterprises are becoming more data-hungry, with the core target of enhancing their data analysis capabilities. Businesses prefer to migrate their workloads to the cloud; new businesses, especially, prefer not to set up their own on-premises server systems. Services like PaaS (Platform as a Service), IaaS (Infrastructure as a Service), and CaaS (Cloud hosting as a Service) are gaining momentum at a faster pace. Organizations are realizing that a good cloud service provider will have better performance and security provisions. Actually, it is largely a matter of cost: pay a good price and get whatever you require. Where businesses fail in the cloud is a different story; those seeking cheaper solutions gradually land in a situation that compromises their data. Public cloud, or IaaS, is the most popular cloud service.
In fact, enterprise cloud transformation is a two-way process, a kind of knowledge enhancement at both ends. There are generalist cloud providers, capable of building and operating public cloud platforms at scale; some of the best examples in this class are Google, Microsoft, and AWS. On the other hand, we can now clearly see a specialist cloud service provider market evolving at a faster pace. These specialists help enterprises understand and consume cloud services effectively and intelligently; they have real expertise in this field. So there is a distinct line of separation between the public cloud and CSP-owned offerings, with the latter able to manage complex environments in a better way. And enterprises will definitely have to think about multi-cloud environments. Enterprises should always demand delivery of a breadth of products and services under a single banner.
Cloud Service Provider Must Offer a Strong Relationship
A smart cloud service provider would deliver a larger bouquet of products and services either with its own in-house capabilities or through partnerships with strong vendors.
With the inclusion of social media data, enterprises have no dearth of data available for data analysis and business intelligence; in fact, it is their internal issues that stop them from using it. There is a high risk of falling behind in the race if organizations don't actually perform this analysis. The fact is that many organizations still don't value it: they are either unaware that this is a big factor in thriving in business, or they don't know how to run the show. And not all those who jump into the fray are doing it right; they may have understood its power, but they are not cashing in on it properly. Identification of the right data is important, because an organization may have data in multiple locations and in different forms; it could be lying in relational and non-relational databases.
As a matter of fact, it is very important for organizations to know what data they possess and where it all resides. Unless they are clear about that, they can't use it, with the help of data analysis and business intelligence, for immensely useful business outcomes. Enterprises moving in the right direction are gaining great insights about their business, customers, and competitors. That is one of the reasons cloud platforms are gaining preference over on-premises storage: wherever their data lies, and in whatever shape, it has to be at their disposal. Cloud providers are adding a lot of value to attract enterprises, and any cloud provider with better data management capabilities is in high demand compared to the generalists. In addition, there are third parties that specialize in this field.
Proper Data Analysis Provides An Extra Power To An Enterprise
Obviously, to ensure proper data analysis, an enterprise has to understand various data fabric functionalities.
Is it the right time for enterprises to start cloud adoption at a better speed? Is it safe to leave the boundaries of the on-premise model and jump to the cloud? We all know that innovation is the key today to performing better and thus outpacing your competitors. And any delay in innovation is going to cost; at times this cost may be heavy. But along with all this, is this the right time to take chances without focusing on commitments? Can you sacrifice the latter at any cost? Is that not going to create a high risk to the business? Mass customization is the trend these days. So is flexible pricing. You may have to leave regular trends behind to achieve your organizational goals. The cloud revolution, in fact, needs top-level support to thrive. If the top level is lacking confidence, nothing can help an organization.
Cloud adoption promises to deliver higher flexibility and better control to an organization. In fact, it brings in many variations along with different combinations. An organization's burden, as a matter of fact, decreases: it takes less energy to manage the cloud provider, hardware vendors, and system integrators. All it needs is to select the right partner. With this, an organization's running, operational, or recurring costs also come down. For instance, you need fewer resources, less space, and less infrastructure now. It is all a matter of a change in culture and mindset to go for it. In fact, the cloud-procurement model is achieving maturity and hence giving higher confidence to organizations. A good way to measure the level of your cloud adoption is the extent of infrastructure footprint visible in the organization: the higher the former, the lower the latter. The two are, in fact, inversely proportional to each other.
Cloud Adoption Is Gaining Higher Momentum
As I say, it is all a matter of mindset. Otherwise, risks and breaches happen within the four walls of an organization despite all control mechanisms.
An Interview with Bruce Talley, CEO, and co-founder of NAKIVO, Inc. (https://www.nakivo.com)
Q: Please tell us about NAKIVO in two words.
BT: NAKIVO was founded in 2012, offering customers a reliable VMware backup and replication solution. Over the years, we have developed into an efficient multinational company headquartered in Sparks (Nevada, USA) with several offices worldwide. We now have a versatile product for VMware, Hyper-V, and AWS EC2 environments, used globally by thousands of companies. Our success is confirmed by 5-star online community reviews and 97.3% customer satisfaction with support.
Q: What are your company’s goals and priorities?
BT: With the amount of critical data circulating in the modern business environment, data protection is the highest priority for everyone. That is exactly what lets NAKIVO take such a strong position in the market. Over the five years of our company’s operation, we have been focused on ensuring the protection of business-critical data for customers with virtualized, cloud-based, and hybrid environments. Our primary goal is to provide everyone with a powerful, affordable, easy-to-use solution that helps ensure the protection and recoverability of business data 24/7. NAKIVO’s 85% YoY revenue growth, as well as our customer base almost doubling in 2017, clearly demonstrates that a growing number of companies understand the necessity to abandon outdated legacy backup software for modern backup solutions, such as NAKIVO Backup & Replication.
Q: These are rather impressive numbers. Could you tell us a little more about your product and its features? What exactly makes you stand out?
BT: NAKIVO Backup & Replication is a high-performance solution for AWS EC2, VMware, and Hyper-V backup that also provides near-instant disaster recovery with VM replicas. The key principle we have built our product around is “the best data protection value for the money”. While NAKIVO Backup & Replication comes at a considerably lower price tag than our competitors’ solutions, the product offers a comprehensive set of features to meet everyone’s needs, standing tall among market leaders. Our solution is fast, scalable, and user-friendly.
NAKIVO Backup & Replication works out of the box. The product can be installed in under 1 minute on Windows or Linux, as well as deployed as a pre-configured Amazon Machine Image or VMware Virtual Appliance. Moreover, if installed on a NAS, our solution can turn the device into an all-in-one VM backup appliance. Currently, we support NAS devices by ASUSTOR, QNAP, Western Digital, and Synology, but in the future, we plan to extend the list by adding other well-known vendors.
NAKIVO Backup & Replication enables you to take full advantage of agentless, image-based, application-aware backup and replication options, decreasing the overall cost of your backup operations by 50% or more.
Incremental backups, LAN-free data transfer, and built-in network acceleration can help you achieve a 2X increase in the backup speed. Automated exclusion of swap data, global deduplication, and backup compression can reduce the size of your VM backups by several times.
Now add the abilities to verify your VM backups with screenshots and instantly recover any VM, file, or application object. As a result, you get a powerful solution which is much faster and less expensive than the alternatives. NAKIVO is truly customer oriented. We act on our customers’ feedback and continue improving NAKIVO Backup & Replication to remain the #1 VM data protection solution on the market.
Q: How many customers use NAKIVO Backup & Replication? Also, can you name some of your key customers?
BT: Over 10,000 companies throughout the world are now using NAKIVO Backup & Replication in their virtual environments. We are extremely proud of the fact that many of them were able to see the advantages of using our product over our competitors’ solutions immediately after testing our full-featured free trial.
China Airlines is one of our key customers. With two data centers, over 900 VMs, and more than 60 VMware ESXi hosts, their critical data must stay protected at all times in order to ensure the continuity of business-critical operations. By choosing to forego a legacy solution in favor of NAKIVO Backup & Replication, they reduced their weekly backup time by 10 hours, storage space consumption by 60%, and VMware backup budget by 30%.
The Center for Scientific Computation and Mathematical Modeling (CSCAMM) at the University of Maryland has also greatly benefited from switching to NAKIVO Backup & Replication. Faculty, staff, and students required 24/7 access to research materials through their website. They also needed a powerful backup, replication, and recovery solution. With NAKIVO Backup & Replication, their backup processes now take no more than an hour. What’s more, the instant file recovery feature has managed to save them from potential disaster on multiple occasions. We are proud that CSCAMM could reach these results with our help.
These are just several examples. Our customer base includes other major global companies, among them Honda, Coca-Cola, and Microsemi.
Q: What is NAKIVO’s business model?
BT: NAKIVO aims to be completely channel-based. At the moment, we have over 2,400 channel partners and a significant number of distributors in 124 countries worldwide. By partnering up with NAKIVO, they gain large discounts, sales training, deal registration, and regular promotions to drive sales.
Q: And what about your support model?
BT: If you have a problem or any question, you are welcome to contact our Customer Support Center. Also, if you are experiencing some technical difficulties, you can send us a support bundle right from the NAKIVO Backup & Replication web interface. The support bundle contains logs and system information, which is everything our specialists may need to help you resolve the issue as quickly as possible.
Q: What are your plans for the future?
BT: NAKIVO is already one of the fastest-growing companies in the industry, and we don’t intend to settle. We plan to do better and go further. Our plans for the following years are to expand our market presence and focus on large enterprises by adding new highly-demanded features.
Now, you have a personal assistant whom you can call 24×7 for any purpose regarding CRM. Zia Voice comes from Zoho as a super-strong companion to your Zoho CRM. In fact, it is the first conversational artificial intelligence (AI) tool for sales teams. Along with it, you also get a hyper-customization platform for enterprises. This interactive voice and chat tool has intelligent predictive capabilities. Also, it manages your email sentiment in a unique manner. This is, basically, a framework over which enterprises can create their own internal apps for their global teams. It helps in creating a universal and transparent system. For organizations using Zoho CRM, Zia is not a new entity; the only new thing is its voice avatar. So, in fact, Zia gets a voice right on its first anniversary. Exactly a year before, on 28th February 2017, Zia came into existence.
Zia Voice is basically a boon for sales executives who are on the move most of the time. In today's highly competitive environment, nobody can afford to lose time. Even customers today demand a real-time kind of response in most cases. So, Zia is now able to proactively announce deal closures. It can also analyze email sentiment in an intelligent manner; that would probably be a very time-consuming activity if you were doing it manually. Thus, it is definitely going to enhance your efficiency and timely decision-making capabilities. The release comes along with Catalyst, a capability that empowers your teams to create customized mobile and web apps. These apps can integrate well with Zoho CRM. Also, teams can now create additional capsules or modules for Zoho CRM for internal use. The whole development resides on the existing Zoho framework and infrastructure.
Zia Voice Is a Powerful AI tool for Zoho CRM Users
That means Zia Voice and Catalyst are bound to transform businesses in a unique fashion. They will not only help in deepening the focus on business but also enhance overall efficiency. Mani Vembu, COO of Zoho Corp, says, “We are delivering the first conversational AI for CRM, with Zia Voice. Zia’s enhanced AI capabilities will now help salespeople sell smarter, with contextual assistance and access customer information through a powerful voice and chat interface.” It is basically an interactive voice and chat bot. It can quickly answer your everyday queries in a real-time environment. These queries could be on new leads, periodic forecasts, traffic status, conversion rate, average deal revenue, and so on. The conversation happens in the form of simple chat and voice messages.
Let us assume that you have a number of legacy mission-critical applications serving your organization for various purposes. Currently, all of these are on-premise. That means you have your own data center and servers where these applications and the corresponding databases reside. You are solely responsible for their upkeep and uptime. So, everything is under your control right now. In other words, the complete headache is yours. That includes patch management, upgrades, releases, availability, backups, restorations, security, and infrastructure. Assume the whole setup needs an overhaul. The legacy applications are not serving the business well. It could be any, all, or some of the issues like performance, availability, upkeep, security, operational costs, skilled manpower, or a heterogeneous environment. The business needs an overhaul because things are getting out of control. In such conditions, what will your decision be? Before taking a decision, let us see what possibilities you have.
Now, since these are mission-critical applications, a quick decision is important; you can't let the business suffer because of a delay in the decision. But, on the other hand, the decision has to be the best one because it is a one-time investment. So, here are the various options you have to explore:
1. The first option is to RETAIN. That means keeping things as they are. Of course, given the situation described, this is not really possible.
2. You can think of MODERNIZATION. That means retaining all the legacy mission-critical applications on-premise, but at the same time exploring a move to some modern application or a change in the infrastructure architecture. This also doesn't seem to be a good solution.
3. You can think of what they call Refactor and Shift. That means you will redesign the existing applications using a cloud-native environment, and then deploy them in an off-premise cloud system.
Various options for mission-critical applications
4. This option is Repurchase and Shift. In this case, you will replace your current mission-critical applications with a Software-as-a-Service (SaaS) model, or replace these applications with their cloud counterparts. Either way, you would be moving from an on-premise to an off-premise cloud environment.
5. The last option is Lift and Shift. Ideally, this is the best option here, and you need to explore it in the most feasible manner. In this model, you migrate the existing on-premise mission-critical applications to an off-premise or cloud environment, but with minimal changes in the code or logic.
Now, let me know if you have ever been in this kind of situation regarding your mission-critical applications, and which model you chose. Otherwise, imagine you are the one who has to take the call. In that case, which option will you go for?
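To make the five options above concrete, here is a small, purely illustrative sketch of how the choice could be expressed in code. The constraint names (`still_meets_business_needs`, `must_stay_on_premise`, and so on) and the order of the checks are assumptions for illustration, not a prescriptive decision framework.

```python
# Hypothetical sketch: mapping the five options above to simple selection
# criteria. Constraint names and their priority order are illustrative only.

def suggest_strategy(app):
    """Return one of the five options for a legacy application profile."""
    if app.get("still_meets_business_needs"):
        return "Retain"                      # option 1: keep things as they are
    if app.get("must_stay_on_premise"):
        return "Modernize"                   # option 2: new stack, same premises
    if app.get("saas_alternative_exists"):
        return "Repurchase and Shift"        # option 4: replace with SaaS
    if app.get("code_changes_acceptable"):
        return "Refactor and Shift"          # option 3: redesign cloud-native
    return "Lift and Shift"                  # option 5: migrate with minimal change

print(suggest_strategy({"saas_alternative_exists": True}))
# Repurchase and Shift
```

In practice, of course, each application would be scored against many more factors (cost, compliance, skills), but the shape of the decision is the same.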
I think no learning has come to enterprises from the famous 2017 Equifax security breach, where the cause was easy penetration through an unpatched application. 70% of enterprises worldwide are still living with this vulnerability. Merely banking on IT staff will never resolve this issue; there has to be an alert mechanism that works without fail, on time, and rather proactively. Definitely, you are safe as long as you are not the target. So, you can live happily with a safety tag on your job even if you are not deploying patches instantly on their release. Honestly, very few CIOs/CTOs would be able to answer how many of their applications are acting as an open invitation to hackers, because quite a few applications are functioning without the latest patches. Of course, the more patches you skip, the more vulnerable your organization becomes. It just needs one hole in the balloon.
Do you have a vulnerability disclosure mechanism in your organization? Mostly it is the IT department that decides new application deployments: what, where, and how. But there are a few organizations where this is a mutual decision between business and IT. Interestingly, there are very few where the line of business takes the decision. What about your organization? Who is responsible for a security breach? IT? Business? Or nobody? I think, in the next couple of years, there will be a big transformation in most organizations: non-IT functions will have more influence on an organization's application and workload management. I am sure most of you might not agree with this statement. But those who agree will have to think and answer which specific non-IT business function it will be. It could be HR, R&D, Operations, Finance, Legal, Sales, Marketing, or Executive Management.
Security Breach is a top concern for most businesses
A security breach is, of course, a matter of serious concern today for most organizations across the globe.
The automobile industry is transforming, and it is transforming fast. Changing technology trends in the automobile industry are quite visible. On one hand, there is the pressure of saving fuel; hence, you find automobile manufacturers paying more attention to fuel efficiency, better performance, higher mileage, etc. On the other hand, you find higher penetration of technology to achieve these goals. Not only that, with a higher focus on customer requirements, the number of gadgets is also increasing. For instance, Bluetooth has become a basic necessity in a vehicle. There are voice command systems in vehicles: a commuter might use them to switch on the stereo, make a call, start the global positioning system to find the most feasible route, and so on. And you can easily find an in-dash navigation system in almost all mid- to higher-segment automobiles.
What is making all this happen at such a fast pace? Firstly, it is the internet, which has become a source of revolution across the globe; the geographic barriers are gone and the best technologies are easily available. Secondly, because of the internet, the consumer has become quite smart. Thirdly, it is the millennials, who have had digital technology flowing in their blood since childhood. So, basically, it is the pull from the customer that is driving industries to follow the changing technology trends, or leave the game and let the best players carry on. Though the Head-Up Display or HUD is not that popular in India, it will be sooner or later. The in-dash touchscreen display, for instance, is now easily available in higher-segment cars in India. Similarly, the trend of automatic safety functions is increasing. So is automation in driving.
Changing Technology Trends Are Changing Fast
Overall, it is about the customer-centric approach. Technology has no limits. It is all about how you innovate with and leverage the changing technology trends.
What makes Infosys a leader in the Forrester Wave for Applications Management and Digital Operations (AMDOS), Q4 2017? Infosys, as we all know, is a global leader in business and technology consulting, famous for providing next-generation technology solutions. So when an independent global research organization, Forrester Research Inc., acknowledges Infosys as the leader in its field, it means a lot, both for Infosys and its customers. The scoring criteria include digital strategy, business strategy, and industry specialization. The ranking was among 12 top service providers on a global spectrum, across 26 different criteria of assessment. Overall, there were three broad-level categories of assessment: offerings, business strategy, and market spread. In fact, there is a reason behind most global enterprises preferring Infosys as their technology partner, and this creates the right level of recognition for the organization.
Ravi Kumar S, President and Deputy Chief Operating Officer, Infosys says, “Emerging technologies like AI, Automation, Cognitive Computing and Machine Learning are transforming the IT ecosystem like never before. With this rapid change, it has become imperative for every organization to adopt digitization to deliver enhanced value to its clients.”
He further adds, “Infosys believes in harnessing automation in the service of digital transformation and this recognition as a Leader by Forrester for AMDOS reaffirms our vision and strategy of focusing on our clients existing business systems and processes by leveraging newer technologies. We believe this recognition echoes our position as a partner of choice for global enterprises in this space.”
Forrester Wave Recognizes Infosys A Leader
Forrester Wave report says, “Infosys was among the first of the major global systems integrators (GSIs) to embrace design thinking, and they continue to embrace that philosophy as part of their digital transformation strategy. The firm has continually proven its strengths in legacy modernization, analytics, and ERP and we are extremely happy that our research reflects the same about Infosys.”
Startups are becoming a thorn in the side of technology firms working on legacy frameworks. In fact, most startups are lean in structure; usually, all startups begin that way. And it is wonderful that at that stage, all employees are actually doers. It is later that we start planning to create a think-tank or think-pool in the organization. A hierarchy starts taking shape, and then you see layers formed in the organization: a top layer or C-suite, a middle layer, a bottom layer, etc. Does growth mean adopting all kinds of measuring tools, which actually start creating goof-ups and confusion? It is good to see the world-class Australia and New Zealand Banking Group (ANZ) investing in Data Republic, a big data startup. The purpose is to enhance performance with the help of innovative techniques and advanced technology while leveraging big data analytics.
While there is no dearth of data in established financial institutions, its usage in terms of data analytics and decision making is not appropriate. Be it any data in any form, the most important things to take care of are its storage, categorization, and sharing. Leveraging cloud technology is important; in fact, organizations moving faster to the cloud are doing better than organizations still on an on-premise model. Data storage requires a few things by default: data safety, security, availability, and authority. Categorizing data properly can provide wonderful insights about customers and business. Sharing is the need of the hour, but it is important to understand what to share, how much to share, and where to share. In fact, sharing appropriate data with customers can save a lot of time and manpower. The key point is to get what you need out of the data you already have.
Data Analytics Can Help In Making Real-Time Smart Decisions
Rather than understanding that there is already a lot of power in their hands in terms of the existing data, organizations start changing their data models. This existing data, in fact, can give a lot through data analytics.
A recent survey by Oracle, Intel, and Longitude Research brings out interesting figures about Indian businesses adopting and completing cloud strategies. It says more than 40% of the organizations in India have been able to deploy them successfully. That is, in fact, an impressive figure. What it means is that these organizations were working on an on-premise model earlier, and they were able to achieve the same or a higher level of efficiency after moving their servers to the cloud. The survey says that the organizations in this successful category have more than 70% of their business applications in the cloud, and that too with an improvement in performance. In fact, it is one of the reasons they are growing faster than their competitors. As a matter of fact, most businesses in India are realizing this. But before everything else, having the right strategies in place is important.
Adopting innovative technologies like open source as part of your cloud strategies is possible, but remember that open source doesn't necessarily mean free. In fact, in the absence of proper research, adoption of an unsuitable open source tool might become a costly affair for the organization: costly in terms of time, money, and resources. Cloud vendors are quite capable of handling an organization's multi-platform requirements. Obviously, no organization runs on a single application; there are always some legacy and supporting applications in addition to the key or core business application. And for this kind of environment, you always need integration tools, techniques, and developers. Around 25% of CIOs/CTOs understand the need for automation and feel it is the right time to go for it. In fact, around 20% of top IT executives are ready to adopt Artificial Intelligence (AI) and Machine Learning (ML) in their respective organizations.
Right Time To Implement Cloud Strategies
So, that means AI, ML, Big Data analytics, etc. are by default becoming an integral part of the cloud strategies of Indian organizations.
Digital disruption has been able to bring a lot of opportunities to a new business spectrum. The internet was probably the biggest disruption, connecting the whole world with each other so easily. In fact, different time zones and geographic distances are no longer an issue. Travel itself has come down drastically; organizations are leveraging video and audio conferencing tools for conducting serious meetings and interviews. A simple rule of business is that a problem for one is an opportunity for another. Look at the phenomenal consumption of media today: using gigabytes of data in a single day is normal. On the other hand, look at the huge number of videos and podcasts added to the internet on a daily basis. Storage space, internet availability, bandwidth, online streaming, backups, and minimal downtime are the new parameters of the digital world.
Obviously, when we talk about digital disruption and digital transformation, security is a big concern. That itself creates a large demand for security solutions, covering encryption, malware protection, and other areas. The travel industry is another example of digital transformation: flight and hotel bookings are happening more online than offline. Accessing data has become the most critical requirement of any business. Every business needs a real-time environment; more than the business's, it is, in fact, the consumer's requirement. Customer-centric businesses are doing better than others. Creating a flawless revenue stream without much investment has become easier now. In fact, all services are now data-based services, and every transaction is data-driven. IoT and connected devices are creating new, innovative avenues for businesses and consumers. Those businesses not leveraging this technology today will find themselves far behind the others. It is, in fact, all about changing mindsets and moving ahead.
Digital Disruption Is A Big Differentiator
Digital disruption will transform businesses like never before. It all depends on how you innovate to leverage it.
Digital Transformation is the key to redefining business. Definitely, any transformation must have a positive and substantial impact on any or all of the business factors, like growth, revenue, and customer retention or acquisition. Overall, the purpose of any transformation is to get appropriate returns on your efforts in terms of these tangible or intangible factors. Flexibility in the business to accommodate the unstoppable demand for rapid change is very critical. In fact, if you have an ERP in place that is rigid in nature, it is going to be dangerous for your business. It might freeze your processes, forcing you to walk a tightrope without an iota of flexibility. Obviously, this increases the number of threats and vulnerabilities in the whole ecosystem, which in turn makes the whole system cumbersome and incoherent, creating high confusion.
If that is the phase your business is passing through right now, you need digital transformation to redefine your business. It is not merely a buzzword; in fact, you need to understand it well before deploying it. The best way to understand it is to see where your business is right now and what you want to achieve. Basically, you must be able to figure out the most daunting issues in the business; connecting the dots with the help of the latest technology then becomes easier. The big picture is that digital transformation is already transforming businesses across the globe, yet most businesses are not ready to leverage its power. Strategically, your business processes, activities, and competencies need to accelerate to keep pace with the transformation. Things are actually changing faster than you could anticipate. Imagine manufacturing units in a portable format. That is a phenomenal transformation.
Digital Transformation is not merely a Buzzword
A recent example of digital transformation is Boeing. To create its airframes, it now uses an all-virtual design framework. This has been a huge transformation for the organization, reducing its time to market by 50%. This is what it means to redefine business.
Blockchain has become a widely discussed topic, but what exactly does “blockchain” mean to an enterprise? Today, I speak with Jim Scott, Director, Enterprise Strategy & Architecture, MapR Technologies on what it is, why it’s important and the rules that govern its use within the enterprise.
1.) With the creation of Bitcoin, “blockchain” has become a popular topic. Could you discuss why you believe blockchain has or soon will become critical in enterprise environments?
Blockchain brings with it a capability that simplifies regulatory oversight. The concept of irrevocable proof is valuable to most enterprises who need to prove that what they say happened actually did in fact happen. This can be done with a blockchain solution, thus simplifying regulatory processes which require this type of detail. In addition, blockchain solutions reduce the friction in enabling the exchange of information. Because of the simplicity in the exchange of information, there is also a reduced latency in the completion of exchanges leveraging this approach.
2.) What laws and regulations are enterprises facing in regards to its use? And, do you feel these will impact its adoption?
This technology does not conflict with any laws or regulations, but instead aids in the support of those laws and regulations thrust onto businesses. Being able to provide highly available, fault tolerant and immutable storage for events that occurred is tremendously useful in many industries and will assist in proving the difference between facts and data which has been tampered with.
3.) Could you break down the three key areas related to blockchain?
a. Shared Distributed Ledger – a blockchain is really nothing more than a shared distributed ledger. In its simplest form, a ledger is append-only, and for this use case it is highly available, where multiple people can write to it. It can easily be compared to a checkbook ledger, except in this case it is digital, runs on multiple machines at the same time, is highly available, and allows many people to write to it, which is considerably different from an actual checkbook ledger.
b. The concept of consensus – in order to allow things to be placed on a blockchain there must be a notion of consensus or agreement for when things may be placed onto the blockchain. Interestingly, voting is a form of consensus and is also considerably simpler than something like proof-of-work which is more broadly applicable to public blockchains for cryptocurrencies.
c. The concept of smart contracts – this is a unit of code which runs when some event triggers it and is fully audited. This is no more complicated than any other piece of software written within any organization. It can be simple or complex. The key is that there is a trigger event, and a full audit history on the blockchain to prove that the work occurred and when.
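The first two ideas above, an append-only ledger plus a consensus check, can be sketched in a few lines. The following is a deliberately simplified toy (no networking, no signatures, and simple majority voting standing in for real consensus protocols), just to show why tampering with an earlier entry is detectable:

```python
import hashlib
import json

# Toy append-only ledger: each entry stores the hash of the previous entry,
# and a majority vote acts as the consensus check before anything is appended.

def entry_hash(entry):
    # Deterministic hash of an entry (sort_keys makes the JSON stable).
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = []

    def append(self, data, votes):
        # Consensus: a simple majority of voters must approve the entry.
        if sum(votes) <= len(votes) / 2:
            raise ValueError("consensus not reached")
        prev = entry_hash(self.chain[-1]) if self.chain else "0" * 64
        self.chain.append({"data": data, "prev": prev})

    def verify(self):
        # Tampering with any earlier entry breaks every later prev-hash link.
        for i in range(1, len(self.chain)):
            if self.chain[i]["prev"] != entry_hash(self.chain[i - 1]):
                return False
        return True

ledger = Ledger()
ledger.append({"tx": "A pays B 10"}, votes=[True, True, False])
ledger.append({"tx": "B pays C 4"}, votes=[True, True, True])
print(ledger.verify())                              # True
ledger.chain[0]["data"]["tx"] = "A pays B 1000"     # tamper with history
print(ledger.verify())                              # False
```

This is what makes the "irrevocable proof" property useful for regulatory oversight: once an entry is linked into the chain, changing it silently is not possible without breaking verification.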
4.) What advice would you offer to organizations beginning their blockchain journey?
Realize that education about this topic to stakeholders is critical to the success of a blockchain solution. Data is accessed and managed in a way that may be somewhat awkward when compared with data management systems of the past. This model is more akin to a streaming architecture and brings with it many benefits. Expect from the onset of any project in this technology space that you will be required to perform large-scale analytics, as well as apply enterprise level security practices to the data being placed on a blockchain. Any attempt to ignore those details will yield certain failure down the road.
5.) Anything else you would like to add?
Do not ignore people within the business who have concerns about this technology stack. Education is critical and driving this with a culture of inclusion is required for success. They can be assured that while blockchain as a concept is relatively young, it is really just a repackaging of concepts that have been around for quite a while, and due to this it delivers a new set of business benefits.
About Jim Scott, Director, Enterprise Strategy & Architecture, MapR Technologies
Jim Scott is an experienced leader having worked in financial services, regulatory, digital advertising, IoT, manufacturing, healthcare, chemicals and geographical management systems. He is a co-founder of the Chicago Hadoop Users Group (CHUG) where he helped grow a now flourishing community around next-generation technologies. Scott has built systems scaling to 50+ billion transactions per day, and his work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts. His passion is in building combined big data and blockchain solutions.
If you are heading technology in an enterprise, you must read this. Enterprise data is a long-term investment. What you enter today may not reap fruits tomorrow, but later, especially in the case of analytics. There has to be substantial data in place if you want to drive business decisions by leveraging technology. Moreover, even if your workflows push flawless data into your various databases through processes built on technology platforms, that is only a semi-relief. There are some very basic things you need to take care of right from the beginning. Firstly, is the data accurate? Secondly, is it complete? Is it going into the right columns, in the right places, with the right values? Above all, does it preserve data integrity and entity relationships at that juncture? Now, this is the simpler side, where data goes into your databases via business applications. What about manual data?
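The checklist above, accuracy, completeness, right value in the right column, can be made concrete with a minimal row-level validation sketch in Python. The column names and rules here are hypothetical; a real system would derive them from the database schema.

```python
# Hypothetical column rules for an alumni record; a real system would
# derive these from the schema rather than hard-code them.
RULES = {
    "student_id": lambda v: isinstance(v, int) and v > 0,
    "name":       lambda v: isinstance(v, str) and v.strip() != "",
    "year":       lambda v: isinstance(v, int) and 1980 <= v <= 2025,
}

def validate_row(row, rules=RULES):
    """Return a list of problems: missing columns (completeness) and
    values that fail their rule (accuracy / right value in right column)."""
    problems = []
    for col, check in rules.items():
        if col not in row or row[col] is None:
            problems.append(f"missing:{col}")
        elif not check(row[col]):
            problems.append(f"invalid:{col}")
    return problems
```

Running such checks at the point of entry, rather than years later, is exactly what separates usable enterprise data from a 30-year backlog of unusable records.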
Two immediate real-life enterprise scenarios come to my mind in this context. The first is an education company with multiple centers across India. After 30 years of operations, they realized they had no concrete, structured data about their alumni. That looks quite weird, doesn't it? A technology education company making that kind of blunder. Even after a year of hard effort by their IT and operations teams, they couldn't assemble that data. That is the harsh reality of how organizations work. And yet, on paper, that organization has an in-house student lifecycle core business application, with SAP in place on top of that. That is the respect they showed their enterprise data for all those years. How can such severe carelessness fetch any benefits for the organization?
Enterprise Data Needs Protection and Care Right From Inception
The other case is a government organization managing skill building for the country's youth and preparing them for an international competition next year. Enrollment happens in two ways. One is through their online portal, which is fine. The other is on paper, through a manual process. Now, this manual data is a sleeping volcano in the making. There is no discipline ensuring that the manual data lands in the same place where the online data resides. With such anomalies, how can you expect useful reports displaying complete data at a later stage? Being a big organization doesn't ensure you won't make foolish mistakes you repent for life.
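The way to defuse that volcano is a routine reconciliation job: compare the digitized paper records against the online store and flag what is missing or mismatched. A hypothetical Python sketch, where the enrollment IDs and record shapes are assumptions for illustration:

```python
def reconcile(online, manual):
    """Compare manually captured (paper) enrollments against the online
    store, keyed by enrollment ID. Returns the IDs present only on paper
    and the IDs whose fields disagree between the two sources."""
    missing_online = [eid for eid in manual if eid not in online]
    mismatched = [eid for eid in manual
                  if eid in online and manual[eid] != online[eid]]
    return missing_online, mismatched
```

Run something like this after every batch of manual data entry and the gap between the two channels never grows beyond one batch.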
ManageEngine, the real-time IT management division, is one of the three divisions of Zoho Corp. Today, it launches Patch Management on Cloud, thereby strengthening endpoint security. Zoho is one of the few companies in the world that apply the 80:20 rule in reverse. While most organizations spend the most on marketing and the least on R&D, Zoho does exactly the opposite. It is one of those rare companies that spend the highest share of their revenue on R&D and the least on marketing.
That is one of the reasons for their many success stories. In fact, that is what has helped them sustain a profitable organization right from the first year of their launch in the late 90s. There is another great exception in this business house, and it creates a higher level of confidence in the organization.
That is also one of the reasons they have full control over all their products. All their products (and they have a large bouquet of them) are built in-house, with no acquisitions or buy-outs, and no external collaborations or third-party products in the range they offer. With the launch of Patch Management on Cloud, ManageEngine will help organizations protect against cyber attacks, and that too for less than a dollar per device per month.
The devices include laptops, desktops, and servers. I think that is quite nominal to spend per device for the utmost security and safety of the organization. This is an automated patch management solution, currently available for the Windows platform and most third-party applications. Admins get complete control of Patch Manager Plus on Cloud to help them automate the overall patch management mechanism.
Patch Management on Cloud from ManageEngine Is Proactive
The whole process begins with patch detection and finishes with successful deployment. The best part is that Patch Management on Cloud from ManageEngine requires no infrastructure investment. That is how it secures enterprise devices from countless vulnerabilities and cyber attacks at a very nominal expense, less than a dollar per device per month. A recent report by the Computer Crime and Intellectual Property Section (CCIPS) states that more than 4,000 ransomware attacks have occurred every day since 2016.
Ransomware like WannaCry and Petya was strong enough to create panic in organizations worldwide. In fact, the basic reason these attacks succeeded is not that patches were unavailable; it is that the patches were not applied in a timely manner. Human lethargy or ignorance is one of the biggest threats to an enterprise. That is why there has to be an alert mechanism.
It is the failure to react in time that causes the biggest losses to an organization from cyber attacks and ransomware. Most organizations that fall prey either have no mechanism in place, or the people running that mechanism are not responsible enough to understand the gravity of the matter, so there is no accountability. That is why CCIPS strongly recommends an automated and centralized patch management mechanism.
This helps in creating better control over the whole ecosystem of the organization. On the same note, Patch Management on Cloud from ManageEngine takes the reins of an effective prevention system and a timely response mechanism. It definitely helps mitigate the risks in a significant manner. That is how it protects organizations from cyber attacks. In fact, it also counters human carelessness or ignorance by creating timely alerts.
Patch Management on Cloud from ManageEngine Is A Smart Solution
Another reason a strong solution like Patch Management on Cloud from ManageEngine is necessary is the growing mobile workforce. On one hand, the world has become data sensitive; on the other, there is a tremendous increase in mobility. On top of it, the efficiency and productivity of an employee should not be compromised by his or her increased mobility. In such a stringent and risky environment, security is of utmost importance for an organization.
Also, a continuously increasing number of organizations are moving to the cloud, partially or completely. That calls for a change in strategies to manage their business continuity plans. Overall, it is a question of high security, least downtime, and timely action. Patch Manager Plus on Cloud does the same for organizations: it instantly procures, tests, and approves patches for deployment.
The beauty of the Patch Management on Cloud solution from ManageEngine doesn't end here. It handles not only Windows but also more than 850 third-party applications, automatically. In fact, it is quite proactive in its vigilance: it runs a health check of all the systems every 90 minutes. Hence, every 90 minutes there is a report indicating vulnerabilities in machines and raising alerts for them. The cloud version thus ensures timely action with zero delays.
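ManageEngine has not published the internals of that 90-minute cycle, but one scan iteration can be imagined roughly as follows. This is a hypothetical Python sketch in which the hosts, patch IDs, and alert callback are all made up for illustration.

```python
def scan_host(installed, required):
    """Return the patches a host is missing, i.e. its open vulnerabilities."""
    return sorted(set(required) - set(installed))

def run_scan(fleet, required, alert):
    """One scan cycle: check every host, raise an alert for each vulnerable
    one, and return the full report. A scheduler would invoke this every
    90 minutes."""
    report = {}
    for host, installed in fleet.items():
        missing = scan_host(installed, required)
        if missing:
            alert(host, missing)  # e.g. notify the admin console
        report[host] = missing
    return report
```

The essential design point the article makes is that the alert fires from the scan itself, so timeliness no longer depends on a human remembering to check.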
IT teams can patch devices like servers, desktops, VMs, and laptops from anywhere by installing agents remotely. The product is available at www.manageengine.com/patch-management. Pricing begins at $34.50 per month for 50 computers, which comes to $0.69 per device per month. If you are not yet aware of the strength of Zoho products, you can go for a fully functional 30-day trial.
Patch Management on Cloud Comes With Automation in the Endpoint Management
Rajesh Ganesan, director of product management, ManageEngine says, “With dynamics of digitalization fast changing and massive adoption of cloud technology, there is a greater need for automation in the endpoint management space, as endpoints are the major entry points of cyber attacks. ManageEngine’s new cloud-based patch management solution is engineered to meticulously look out for such threats on the move, thereby keeping both data and endpoints secured.”
A free edition of Patch Management on Cloud is also available. This free version of Patch Manager Plus on Cloud is for startups and small businesses, covering up to 25 devices at no cost. That has been Zoho's policy right from the inception of the company: they are very customer-centric, with less focus on price and profits.
Every business has a core application that covers its key business processes. It also engages most of the key users, who are actually the key drivers of this application. They are responsible for keying in the data inputs, and on the basis of timely, correct inputs they draw out the relevant reports. These users also ensure that if any business process changes, a patch or an update to the application takes care of it. In any case, the application has to behave as per the current business process. But that doesn't mean a report for a previous period, when the business rules were different from today's, should come out wrong. The code has to take care of every rule and every period. Everything is fine so far.
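One common way to keep those old reports correct is effective-dated rules: each rule carries the date from which it applies, and a report looks up the rule that was in force for its own period rather than today's. A minimal Python sketch, with hypothetical tax rates and dates:

```python
import datetime

# Hypothetical rule history: each rate is valid from its start date onward.
TAX_RATES = [
    (datetime.date(2015, 1, 1), 0.10),
    (datetime.date(2018, 4, 1), 0.18),
]

def rate_for(report_date):
    """Pick the rule in force on the report date, so a report for a past
    period uses that period's rule, not the current one."""
    applicable = [rate for start, rate in TAX_RATES if start <= report_date]
    if not applicable:
        raise ValueError(f"no rule in force on {report_date}")
    return applicable[-1]  # latest rule whose start date has passed
```

With rules stored this way, a code change for a new business rule is an append, not an overwrite, which is exactly what keeps historical reports honest.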
But in reality, two termites start hollowing out this beautiful, efficient business application: frequent change of users, and a non-serious attitude toward documentation. Both factors are interdependent. The key users are the main knowledge banks of this application; they know which function handles which business process. Initially, documentation like user manuals and technical documents may be up to the mark. But as time passes, new requirements pop up, requiring code changes, and recording these code changes in the relevant documents is missed most of the time. Basically, it is the discipline that starts diminishing and causes this situation. That is when a core business application starts becoming a pain and a headache, and the management stands in a dilemma.
Check Minutely When A Business Application Becomes A Headache
There are easy ways to find out. When you have more than one meeting a week for discussing user issues and handholding, the water is above the danger mark. If your improvements get stuck because developers are spending all their time on troubleshooting, that is a serious concern. That is the right moment to divorce your legacy business application and go for a fresh one. Though it is simple to identify an infected or diseased business application, most enterprises prefer to ignore it and live with it, despite the trouble it creates for everyone.
Migration to the Enterprise Project Management Office, or EPMO, model is gaining higher success, and there are plenty of valid reasons for that. The model emphasizes a higher level of focus on projects, programs, and portfolio management, and it aligns all of these well with organizational goals. Project management is still a pain for many organizations across the globe. At times, despite their best efforts, project teams, and thus organizations, are unable to achieve success. There remains a wide gap between anticipations, aspirations, and actual achievements. That is where this model comes in handy and helps organize things in a better way. Of course, there is no harm in trying something promising in that direction. Now there are ample case studies and business cases showing it to be a success and a better option.
A PMI report released in 2017 states the success stories in that regard. Organizations with a mature Enterprise Project Management Office in place have around 40% higher success rates in their projects. Similarly, such organizations have around 35% fewer project failures compared with their rates prior to adopting this model. The best part is that the model suits almost everybody seeking success in their projects: startups, small and medium enterprises, and of course large enterprises. Every organization, whatever its size, wants to map projects to organizational goals and visions. The success is even more critical for startups, and also for large enterprises with a higher stake in the success or failure of their projects. It obviously has a big impact on acquiring new business and clients, retaining existing clients, meeting business goals, and gaining profits.
Enterprise Project Management Office Model Is Evolving Fast
The Enterprise Project Management Office suits large organizations well, but it is for every organization that wants to be a champion in project management.
Before going deeper into the ocean we call an enterprise, it is important to understand the sole purpose of an enterprise application. In my opinion, it is to ease the life of the business, the management, and the employees. If any of these three dimensions pains, bleeds, or cries, the purpose fails. Many times, enterprises invest in technology without a proper assessment, and an improper assessment leads to an improper roadmap. You might begin with a beautiful concept and purpose in an in-house development of an enterprise application, but two glitches can make it a failure in the longer term. Usually, these development projects are of long tenure, at times infinite, especially when an organization starts with evolving strategies and requirements. In such cases, two things matter most.
Firstly, retaining the key knowledge holders of the business, the product, and the code. In most cases, people move in and out of organizations, making it a difficult task to retain them; good people, especially, are in high demand in the outside world, and not everyone is loyal to the organization. But there are ways to engage and retain them for long. Secondly, since the first factor is never a hundred percent under control and carries high risks and vulnerabilities, documentation is of utmost importance, which most organizations either don't understand or ignore. As a matter of fact, ignorance is not bliss, especially in this case. But mostly the realization comes at a very late stage.
Enterprise Application Can Become Killing
Usually, at that juncture, it is a deadlock at both ends. You can neither go back and restart, because that will impact the business badly, nor continue, because the same set of people is no longer in the organization. There is no dearth of knowledge and technology; the only things missing are the right people, the right strategy, and timely execution.
Enterprises that don’t leverage technology fail fast. Sometimes to the extent of death or an organization. Actually, it is a kind of slow death. The organization, in a way, starts phasing out against their competitors in the market. Gradually, profits start dipping. That creates pressure within the organization. Good people start moving out finding better opportunities. The organization can’t afford to hire new people in lack of profits. Ultimately, the organization starts getting slimmer but not for the sake of good. Because it leaves them only with lazy people who either are happy in any kind of atmosphere. Or they are bad enough to find any opportunity elsewhere. That kind of people have nothing to do with the growth of the organization. They are cool as long as they are getting their salaries. That starts spoiling the internal atmosphere. Finally, the organization reaches to a stage of closure.
Now, it is quite strange that technology is there and an organization is not able to leverage it. It is, in fact, a misnomer that technology solutions are always costly; there are cost-effective solutions, as a matter of fact. Rather, it depends on awareness, and it is up to the C-suite to drive their organization towards technology. The faster you align technology with business, the better. The fault doesn't always lie with the technology wing of the organization. At times, a CIO/CTO wants to propose a solution but lacks conviction and clarity; that is another reason for failure. Your technology wing has to be strong not only in technology, but its knowledge of the business also has to be up to the mark. On the other hand, the various function heads have to be demanding and tech-savvy: they should demand process automation and enterprise solutions.
How You Leverage Technology Defines The Fate Of Your Business
Obviously, how you leverage technology, and to what extent, defines the ultimate fate of your business. And even if you have been doing a lot in that direction, you can't sit idle after some achievements. In fact, it becomes more important to take another leap in that direction. After all, you can't rest on your laurels.
Recruiting new team members is a normal, ongoing activity in project management. You have new projects, and you require a new team. You also need to replace team members leaving the organization for a better prospect or some other reason. On the other hand, engaging existing team members within projects is another task. Actually, there is no alternative: when there is a vacancy, you have to fill it by recruiting new people. More important is to understand what to prefer at the time of recruitment. I, for instance, prefer to hire smarter people over those with plenty of knowledge. I find that people with more knowledge are difficult to convince or change, and we all know that circumstances demand changes in work style, strategy, and even the plan. Smarter people are easier to convince and faster to drive.
In fact, I find that such team members are easier to shuffle between tasks and quicker to find a solution. Of course, basic knowledge of the role and responsibility has to be there. In today's world, higher education or a degree from a better institute doesn't guarantee the best results. That is why there has been a tremendous change in recruitment mechanisms and selection criteria. Nowadays, we need team members with a higher rate of adaptability, because technology is changing very fast. You might have excellent team members, but what about those who are expert in only a single technology? They have a higher vulnerability of being phased out faster. Even the best of the best can't guarantee their berth: the moment you switch to another technology, the whole paradigm changes. It is the smart who win.
Selection Of Right Team Members Is Critical
Above all, it is not knowledge that runs a project; it is smartness that makes you a winner. Of course, the basic knowledge has to be in place.
It was all about ICOs, the virtual economy, India's digital economy, high-speed internet, e-commerce, data, smartphones, and a lot more at the 2-day India Digital Summit 2018 in New Delhi. India aims to build a trillion-dollar digital economy in the next 4 years, that is, by 2022. As we all know, 2017 was a revolutionary year for India: high-speed mobile internet became available across the country, including penetration into rural India. India has now become a broadband nation. A mobile data consumer in India uses about 4 GB of data per month, and soon that will increase to 11 GB per user per month. Now that is phenomenal, and it is only possible because 1 GB of 4G data costs just Rs 10. There are 425 million internet users in the country, and additionally, 335 million mobile users.
Going further, YouTube currently has 225 million users per month in India and is growing at an astonishing rate of 400% year on year. That is phenomenal. Do you know that 235 million internet users in India access it in local languages? Another amazing fact is that 95% of video consumption in India is non-English. UPI transactions are growing 40% month on month. In fact, India is the only country in the world where you can open a bank account in just 3 minutes, and you can get a personal loan in 5 seconds. This is a country where marriage portals make profits and dating sites fail to establish themselves. E-commerce has become omnichannel. Food tech is back: the ET start-up of the year was Swiggy, a food tech online company. More than 10 billion in funding went into startups in India. So much learning at the India Digital Summit.
India Digital Summit 2018 was Very Insightful
Hearing Rajan Anandan, Chairman and founder of IAMAI and Vice President, Google India & South-East Asia, is always insightful and delightful. In Leadercast session 1, Isobar India MD Shamsuddin Jasani talked about what went into making it India's leading digital agency. Firstly, he says, you should be honest with your clients and partners. That, in fact, can't happen without first creating a strong culture within the organization, because only then will every employee automatically become an ambassador for his or her organization. There was a lot more at the 2-day India Digital Summit event.
An Interview with Connor Cox, Director of Business Development, DH2i (http://www.dh2i.com/)
Q: Tell us a little bit about Microsoft SQL Server—both its benefits and its pain points.
CC: While Microsoft SQL Server is among the most widely deployed database management system (DBMS) platforms, it’s true that until recently, its benefits came along with challenges that once seemed unavoidable. So let’s start with some research on SQL Server: Gartner has found that of the $34.4 billion DBMS market, SQL Server captures more than 20 percent of it, behind only Oracle. It’s also growing rapidly—in fact, faster than the overall market and faster than even Oracle. Microsoft SQL Server revenue expanded 10.3 percent in 2016 alone.
Q: Impressive growth for the category, but what have the primary problems historically been when enterprises use SQL Server?
CC: A major pain point that enterprise users have experienced is something termed “SQL Server Sprawl,” which is a lot like it sounds—it’s the uncoordinated, explosive growth of SQL Servers. The reason this sprawled environment develops is that many enterprise customers have new database demands, and as part of that they may find themselves requiring a new instance of SQL Server. To support a new instance, a new physical or virtual server is typically deployed. As the SQL Server footprint multiplies, this can rapidly spiral out of control, as most IT organizations deploy only one instance per server.
Q: Can you talk about the types of pain that result for the enterprise from SQL Server Sprawl?
CC: There are a couple of possible negative outcomes that enterprises can experience from sprawl. First, it can become very expensive, especially when you consider the costs of SQL Server licensing. Even though these line items are much more cost-effective than Oracle, this can still be pricey in light of licensing rules that have changed in recent years. As you can imagine, the SQL Server bill often goes up as the number of SQL Servers rises. Plus, since large multi-core servers are common these days, it can be expensive to pay for per-core SQL Server licensing.
That’s not all, though—there is also the drawback of management complexity. IT administrators in charge of managing SQL Servers also suffer in terms of the time required for management and maintenance, from patching and updating to troubleshooting, maintaining security, and doing migrations. These generally less desirable IT activities take up more and more of admins’ bandwidth, and also end up needing to be scheduled on nights or weekends during planned outage windows.
Q: What options do enterprises have to deal with these troublesome issues caused by sprawl?
CC: Organizations have tried more than one approach to counter the effects of sprawl, but some work better than others. So let’s start with the one that I believe is the most effective: DH2i’s software-based approach, DxEnterprise, which offers both consolidation and cost savings. It works by helping users safely stack 5 to 15 SQL Server instances, on average, per licensed operating system—you can stack even more than that, though, depending on needs, so some organizations run as many as 50 instances.
Q: How does that work in practice?
CC: The key is that the DH2i solution lets users quickly move instances—either automatically due to failure or manually—between hosts. And when it comes to the hard cost savings that result from both physical and logical consolidation, these are actually twofold. First, enterprises save on an operational level since less IT management time is required—administrators don’t have as many servers to manage. Second, fewer cores to license for SQL Server means that enterprises enjoy significant cost savings.
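The licensing arithmetic behind that second saving can be sketched as follows. All the numbers here (core counts, per-core price, stacking ratio) are hypothetical, chosen purely to show the shape of the calculation; they are not actual Microsoft or DH2i pricing.

```python
def sql_core_licenses(servers, cores_per_server):
    # SQL Server is licensed per core, so fewer servers means fewer
    # cores that need licensing.
    return servers * cores_per_server

def consolidation_savings(instances, cores_per_server, price_per_core,
                          stacking_ratio):
    """Compare one-instance-per-server licensing with stacking several
    instances per licensed OS (hypothetical figures throughout)."""
    before = sql_core_licenses(instances, cores_per_server) * price_per_core
    servers_after = -(-instances // stacking_ratio)  # ceiling division
    after = sql_core_licenses(servers_after, cores_per_server) * price_per_core
    return before - after
```

For example, 30 instances on 8-core servers at a made-up $7,000 per core, consolidated at a 10:1 stacking ratio, would shrink the licensed footprint from 30 servers to 3, which is the kind of reduction that drives both the licensing and the management savings described above.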
Q: Beyond combatting sprawl, are there other benefits that organizations might experience if they use the DH2i solution?
CC: Yes, there are several other advantages with DH2i. The biggest ones that come to mind are avoiding expensive SQL Server Enterprise Edition, and support for mixed-version clusters. Other benefits include built-in high availability (HA), easy disaster recovery, and simplified patch and upgrade management. In short, it’s a way to avoid compromises for SQL Server users—not only do they have a simplified management framework for their workloads, but they can also achieve HA and cost savings. And since it’s a software solution, enterprises need not change their infrastructure—they can use their existing infrastructure and SQL Server instances.
Q: You mentioned that there are other fixes that companies sometimes try to use in an effort to combat sprawl, presumably with less luck. Can you talk about these other options and what’s problematic about them?
CC: Companies can try different types of approaches to help to push back against sprawl, but some of the other solutions go hand in hand with high costs—both hard costs and soft ones. There are three other types of solutions that I’ll mention: instance stacking, database merging, and large Enterprise Edition WSFC Clusters. So let’s take them one by one:
Sometimes IT shops use instance stacking to help reduce the number of operating systems and licensed core counts, since Microsoft allows up to 50 SQL Server instances per OS to be installed. The problem here, though, is the creation of a scenario where all of an enterprise’s eggs end up in a single basket, and one outage can thus impact many instances. If you get the stacking ratio wrong the first time, it’s also hard to move instances.
The technique of database merging, on the other hand, centers around moving many databases into the same instance. This approach can help to reduce the number of instances and also lower how many servers are needed. The issue here, however, is that database merging is somewhat high-risk as a strategy, since it gets harder and harder to coordinate planned outages. What’s more, if a server or an instance fails unexpectedly, it can affect many users.
Another approach that some organizations use is large Enterprise Edition WSFC Clusters. Here, companies can create a consolidation platform when they create large Windows Server Failover Clusters with multiple nodes. This too comes with a high cost, since every server involved must be licensed for Enterprise Edition and run the same version. WSFC also has numerous other complexities when it comes to both management and deployment.
Q: So clearly, you recommend the DH2i solution as an alternative to those three techniques. With that in mind, can you share a use case of DxEnterprise?
CC: One great example of a customer using DxEnterprise is a large health system called Asante. Before deploying the DH2i solution, Asante tried other approaches—specifically, they had large WSFC clusters, and a pending Microsoft true-up priced at around $400,000. After deploying DxEnterprise and consolidating on the platform, Asante is now running about 15 to 20 instances per server. After they consolidated with DH2i, the price of their true-up dropped to a relatively inexpensive $20K—a huge cost savings from their original quote. Not to mention the fact that Asante received the bonus of getting built-in HA.
That’s one example, but DH2i works with companies big and small across diverse industries. Regardless of the differences between these organizations, what they all have in common is that they all get to enjoy the benefits of using SQL Server, and can now manage SQL Server costs and availability much more effectively.
CloudPassage Halo simplifies cloud workload security, and it is now available in the AWS Marketplace. It is single-agent software with a single console and a single bill. That way, it provides a three-fold simplification in managing service-based cloud workload security: simpler cloud security budgeting, procurement, and deployment. CloudPassage is just an 8-year-old company, but within that short span it has become a leader in cloud and container workload security. Halo is its award-winning security automation platform, a complete Server Secure cloud workload security solution for the enterprise. With its availability on the AWS Marketplace, it offers cloud strategy solutions meeting all kinds of user needs. Launched in 2010, CloudPassage was the first organization in the U.S. to get a patent for universal cloud infrastructure security, and since then the company has led in creating innovative cloud security automation solutions.
In fact, CloudPassage is a leader not only in cloud security automation but also in compliance monitoring, ensuring high-performing application development and deployment environments. CloudPassage Halo promises universal visibility and non-stop protection for servers in any combination of cloud, containers, data center, and hybrid architectures. That means it can handle any architecture of servers, simple or complex, in any form and environment. The best part is that the Halo platform is available as a service: deployment takes minutes, and it scales seamlessly to the next level. It also integrates easily with the most commonly used infrastructure automation and orchestration tools, including Puppet and Chef, and with CI/CD tools like Jenkins.
CloudPassage Halo Helps Securing Critical Infrastructure
As we all know, application development and workload deployment are the most vulnerable areas and need better handling. CloudPassage Halo helps secure such enterprises by providing the best possible solution. That is why it is a top choice of many global companies operating in different verticals, including finance, media, insurance, e-commerce, transportation, hospitality, and high-tech solution providers. The product integrates with AWS flawlessly, which lets it leverage AWS to manage single-agent/single-console security for hosts, containers, and workloads with the best possible speed and scale options. CloudPassage Halo Server Secure pricing is simple and transparent: $40 per agent per month, with a minimum of 25 agents. That means even if your requirement is below 25 agents to start with, you will pay $1,000 a month.
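That minimum-commitment pricing is easy to express as a one-line calculation. This sketch is based only on the figures quoted above ($40 per agent, 25-agent floor):

```python
def halo_monthly_cost(agents, price_per_agent=40, minimum_agents=25):
    """Server Secure bills per agent with a 25-agent floor, so fewer than
    25 agents still costs 25 * $40 = $1,000 a month."""
    return max(agents, minimum_agents) * price_per_agent
```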
Obviously, as you increase the number of agents, the pricing gets better. Purchasing options and pricing-tier information is available here. CloudPassage is among the Advanced Technology Partners in the AWS Partner Network (APN). In a short span of operation, it was able to acquire AWS Security Competency status in 2016. It supports SaaS subscriptions on the AWS Marketplace and is among the top security providers for private and multi-cloud platforms. Leveraging a cloud-native approach, CloudPassage Halo provides API-based security controls and is simple to integrate with the most popular CI/CD automation tools, SIEMs, GRC solutions, and analysis and reporting tools.
CloudPassage Halo Server Secure for High-Performance Cloud Workload Security
John Janetos, Director of Business Development at CloudPassage, says, “Now, AWS Marketplace customers can buy and deploy CloudPassage Halo Server Secure for high-performance cloud workload security in a highly predictable and cost-effective way – via a single integrated AWS bill. As global enterprises rapidly embrace the cloud for mission-critical workloads and DevOps for application development, automated security that can operate at speed and scale is becoming critical path. AWS Marketplace helps eliminate protracted negotiations to make it easy for our customers to securely embrace the cloud.”
Barry Russell, General Manager of Global Business Development, AWS Marketplace and Service Catalog, Amazon Web Services, Inc. says, “As customers migrate and deploy mission-critical workloads on AWS, they’re looking for the maximum level of security available without increasing complexity. Solutions like Halo Server Secure can help customers automate their critical workload security policies for their business-critical apps as they take full advantage of the scalability and agility of AWS.”
I am in a partnership with my project managers to define their measures of success. Each project manager has to have certain goals, and these goals should be clear and measurable. Unless they are clear, you can't ascertain how to measure them; and unless you measure them, you can't measure a change. These goals need not be the same for different project managers. In my opinion, my project managers should be able to do the following three things at the end of the partnership, and there has to be a measuring mechanism for each.
Following are my suggested measures of success:
- Start listing unanticipated sparks in your work life. This could be a sudden breakdown, an angry customer, a meaningless meeting, or a heated argument with peers or team members. Find out the reason for each and try eliminating those causes on the basis of the 80:20 rule.
- Start listing your tasks (including your whole team's) for the day and prioritize them. Try not to duplicate tasks across multiple people's kitties; that is, each person gets a unique task. If two people are involved in the same task, obviously both wouldn't be doing the same things, so break the task into sub-tasks and allocate a unique piece to each of those team members.
- Categorize your daily/weekly/monthly work into two sections: one for investment, another for saving. Investment is the work that comprises routine jobs with no new learning or enhancement. Saving is doing different, preferably non-routine tasks, or doing the same repetitive routine task in a different, innovative, and more fruitful manner. Next, we will talk about short-term and long-term savings in work life.
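The first point above, applying the 80:20 rule to unanticipated sparks, can be sketched as a simple tally: log each spark with its cause, then find the few causes that account for most incidents. The log entries here are hypothetical, purely to illustrate the mechanism.

```python
from collections import Counter

# Hypothetical log of unanticipated "sparks": (date, cause)
sparks = [
    ("2018-01-02", "breakdown"),
    ("2018-01-03", "angry customer"),
    ("2018-01-05", "breakdown"),
    ("2018-01-08", "meaningless meeting"),
    ("2018-01-09", "breakdown"),
    ("2018-01-10", "heated argument"),
    ("2018-01-12", "breakdown"),
]

def top_causes(log, share=0.8):
    """Return the smallest set of causes covering `share` of all sparks."""
    counts = Counter(cause for _, cause in log)
    total = sum(counts.values())
    covered, selected = 0, []
    for cause, n in counts.most_common():
        selected.append(cause)
        covered += n
        if covered / total >= share:
            break
    return selected

print(top_causes(sparks))
```

In this toy log, "breakdown" alone accounts for over half the sparks, so it is the first cause to attack.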
Measures of Success Are Important To Rise
Basically, this is just the beginning of the measurement mechanism. These are just a few measures of success. A lot of other things will come onto this list, but in a phased manner.
How do you ensure complete testing of a software product? Obviously, you need a good strategy in place for that. Because most often you find a bug after you deploy and hand over a product at a customer site, a bug your QC team didn't simulate during the testing phase. Why does this happen? Well, there are many reasons, but most of them fall back on similar kinds of drawbacks. So, if you focus on these drawbacks and try to cover them during the various phases of the project, you can avoid these later-stage surprises.
We all know that fixing these late-stage discoveries of bugs requires more cost and energy than it would have in the first place. Although we all want to release a bug-free product to the customer, it never happens. There are always some late-hour discoveries that delay the launch of the product.
In worse cases, there is sometimes a recall of the product too, in order to fix some serious bugs and then plan a relaunch. Complete testing, or coverage, is important, and this hunt begins at the requirement-gathering stage. Business and customer requirements are the backbone of any software. If the team is not able to collect all relevant information, it creates trouble at a later stage. If you are sure that the customer requirements are complete and accurate, then the next step is to translate them properly into code.
This is important so that the application behaves as per the anticipation of the business and process stakeholders. For this, it is important for developers to understand the business or customer requirements document well. A small act of carelessness here can cost manifold at a later stage. It is like disassembling the whole vehicle or machine after assembling it.
Complete Testing Has Certain Prerequisites
Most of the time, the testing team refers only to the development document when testing a product. That again leaves a gap in complete testing. Referring to the original business requirements document as well as the coding document is very important.
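One practical way to close that gap is a requirements traceability check: map every business requirement to the test cases that cover it and flag any requirement with no coverage before sign-off. A minimal sketch, with hypothetical requirement and test-case IDs:

```python
# Hypothetical requirements and test-case-to-requirement mapping.
requirements = ["REQ-01", "REQ-02", "REQ-03", "REQ-04"]

test_cases = {
    "TC-101": ["REQ-01"],
    "TC-102": ["REQ-01", "REQ-03"],
    "TC-103": ["REQ-03"],
}

def uncovered(reqs, tests):
    """Return requirements not referenced by any test case."""
    covered = {r for refs in tests.values() for r in refs}
    return [r for r in reqs if r not in covered]

print(uncovered(requirements, test_cases))   # ['REQ-02', 'REQ-04']
```

Any requirement that shows up in the output is a gap between the business requirements document and the test plan, exactly the kind of gap a development-document-only test pass would miss.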
Enterprise DevOps will penetrate deeper in order to scale up faster. By the end of 2017, more than 50% of organizations globally were using DevOps: either their deployments were in process or their implementations were already over. Many of these organizations are already reaping the fruits of their efforts in this direction. That is why Forrester calls 2018 the year of Enterprise DevOps, and in a way that makes a lot of sense. Of course, any technology during its adoption and deployment faces hiccups. These hiccups include technological barriers, user resistance, and management's fear of adopting something new, and all three have their own supporting reasons. Users always resist change. Similarly, management always fears investing in newer technologies. From a different perspective, faster deployments and larger-volume adoption of any new technology bring a larger scope for collaboration and brainstorming.
With more participation in Enterprise DevOps, barriers will take a backseat. The technology will have more buy-in from management. It will become easier to get budget allocations and approvals on the basis of success stories across the globe. Global vendors also play a major role here in terms of trust building and confidence boosting. As a matter of fact, IT investments are becoming a topmost priority for CEOs. Enterprise DevOps has a good balance of risk and agility, and there will be a tremendous decrease in risk during the year. A lot of open-source platforms and tools lack standardization and thus see decreasing adoption, especially as clarity grows that these tools and platforms are not really free. On the other hand, Enterprise DevOps will tighten its grip on governance and standardization. This leads to a major shift.
Enterprise DevOps Will Have A Big Say
More and more organizations are allocating IT budgets for trials and experimentation. Of course, if you are prone to fail, fail fast, learn fast, and move ahead; don't waste time. The cost of DevOps resources will lower further, thus bringing overall enterprise DevOps investments down. Outsourcing experts is a better idea than employing high-cost experts, and it also brings collaborative ownership. Finally, Enterprise DevOps will bring a big culture change within organizations.
A Standard Operating Procedure or SOP is never a one-time task. It is, in fact, something to review on a regular basis, because it works on a simple principle of life: what works well today may not fit tomorrow. That is why regular improvement is important, and as they say, improvement is a never-ending process. There is always scope to enhance and improvise. Some basic things are very important for any startup. In this post, I will talk about standard operating procedures that are critically essential for food delivery startups. The food business is increasing exponentially, and the credit goes to small families and busy lifestyles. Every day a few new restaurants come up. Some of these are dine-in while others are delivery-only. For delivery restaurants, packing and delivery matter most, because these two create the first impression.
Food quality, taste, preparation, and presentation come afterward. There has to be an SOP for delivery, with the least possible fluctuation in delivery time. Try to be as consistent as possible. Any delayed or early delivery has to have a genuine reason, but generally nobody likes a delivery much earlier or later than the scheduled time. The same holds true for a project also. Recently it was an annoying moment for me to get a food delivery 20 minutes before its scheduled time. I had to perform many other things during those 20 minutes, so I was literally not ready to receive it at 7:40 pm instead of 8 pm. I was in the gym when I got a call from the delivery boy.
Standard Operating Procedures Are Critical Enhancers
He was there at my place to deliver the food. Despite coming so early, he was in a hurry and was planning to leave the food with the security guard downstairs. That was purely immature on his part and annoying for me. Firstly, the delivery boy should not be in such haste; secondly, he should not take such annoying decisions. I was seriously wondering if this restaurant has any Standard Operating Procedures in place at all. In another delivery incident, cut onions were loosely kept in an open paper packet. Though the inner film coating was good enough to keep them from getting soggy, it could not stop water from seeping out and spoiling other packets containing bread and other stuff.
Recently I visited a restaurant in one of the posh colonies in South Delhi. It is one of the best restaurants in the capital city as far as food quality, taste, preparation, and presentation are concerned. We were a group of 10 friends. Everything was going well except for three small incidents that could lead to customer dissatisfaction and discontent. These incidents are about timely communication and accurate delivery. In hospitality and F&B, customer experience matters a lot, and such lapses could lead to a loss of the customer and hence the business. In this business, or for that matter in any business, every drop counts, and no customer is big or small in terms of service and delivery. Timely communication is very critical in business and impacts it in a huge way. If it is not timely, it loses its impact and effect.
The first incident went like this. One of my friends ordered guava juice. After 15 minutes and three reminders, the server informed us that this particular juice was unavailable and offered some other flavor. My friend didn't mind its unavailability but was not happy with so much delay in informing us about it. Logically, the information should have come instantly. The second incident was a reorder of a starter; again, after a lot of delay, the server informed us of its unavailability. The third incident is the most interesting. We ordered two iced teas, one with sugar and the other without. The drinks got reversed: the sugarless one went to the person wanting it with sugar and vice versa. Both drinks were sent back for correction.
Timely Communication Is Critical For Any Business Or Project
Surprisingly, the same mistake was repeated. A suggestion was given to the manager to adopt a different color of straw for sugarless drinks to avoid any confusion and embarrassing moments. In fact, there is great learning for project managers in these incidents. First is proper and timely communication, delivered before it loses its sanctity and value. Second is accuracy in delivery, which can happen only if there is a hundred percent accuracy in understanding the customer requirement. The third important element that comes out of these two points is training. A lack of proper training could create bigger risks to the business. So, whether it is a project or a business, timely communication and accurate delivery are quite important for a customer. Hope it makes sense. I would love to read about your similar experiences in the comments section.
Artificial Intelligence is not far from reality in healthcare. If we talk about emerging technologies in this field, AI is probably at the top. That is quite evident, as AI talk was a key component of HIMSS17 in Orlando. As a matter of fact, more than 85% of hospitals in the United States are using some form of AI or other. For instance, NewYork-Presbyterian is running some important projects in this regard. Similarly, machine learning is a fast-emerging concept in healthcare, and the job profile of ML engineers is in high demand in the healthcare industry. The main applications the industry is focusing on with this technology are clinical decision support, claims collection, radiology, and so on. But as of now, the whole gamut lacks standards and metrics. While the deployment and development of the technology are taking place, these need to evolve faster.
In the absence of proper standards and metrics, these emerging technologies might not deliver the fruits to the industry that they are capable of. Rather, these should be in place before deployment begins. Moreover, this also creates a threat to human employment. If a machine learning algorithm proves more effective and accurate, it might be a problem for human radiologists, for instance. Obviously, any new technology comes with a set of pros and cons, but this is quite serious, as it might emerge as the most concerning issue in the industry. We will have to wait and watch what actually happens in the near future in this regard. Kyu Rhee, MD, Chief Health Officer at IBM Watson Health, raises a very valid point in this matter. A couple of months back he defined three fundamentals for AI without which it loses its sanctity.
Emerging Technologies in Healthcare
Firstly, it has to have a valid purpose. Secondly, it should have transparency. Finally, there is skill. The purpose should not be against humans; rather, emerging technologies should be a help to humans. All systems and algorithms should be transparent. And there will be a new skill emerging: Human + AI.
User manual documentation is an important activity, and the user manual is a most critical document, because it becomes high in demand soon after deployment is complete. That is the time when the deployment team members are no longer at the customer location. Though proper training takes place and all key users get trained, still, while performing actual business processes on a new application, the organization needs some kind of help from the user manual. The user manual thus has to be accurate, precise, and at the same time descriptive, with enough relevant use cases and examples. Let us take an example. I had to book a taxi. The taxi driver calls me for directions to my place. I tell him to take a right turn from the main road signal light. Soon after the turn there is a bank.
The taxi driver reaches the bank and calls me to inform me. I tell him to drive a little further: there is a park, and just before the park there are two gates, where I tell him to wait for me. There is a gap between what I intend to tell him and what I actually tell him. That creates confusion. Thus, when I reach where I think the cab should be, it is not there. The cab, in fact, is standing a kilometer ahead because of my confusing directions to the driver. Now, that should not be the case with user manual documentation. It has to be very clear about each step, field, and value. That is why it is not everybody's task. There are experts for the job who know well what to create and how to present it. Despite all good efforts, these documents also need revisions.
User Manual Documentation Should be Process Driven
Mostly, revisions in user manual documentation are due to changes in code or process. But at times they are also due to ambiguities in the document that need correction.
Two QC engineers working on a code testing project complete their task with the submission of their testing report. The project is small but critical, and everything is happening as per plan. The next step is to launch the product for a small group of end users. In fact, this is going to be the most crucial phase of the project, when the real effort of all the teams will await appreciation. But everybody has only one thing in mind: what if something doesn't work? What if something goes wrong? If it is not too drastic, there is nothing to worry about, as the project manager already has a week's buffer before the handover, launch, and training. This training will be for the larger group of end users.
The plan is to record the complete training session and then make it available online for anyone. The alpha testing phase starts well, but on the second day there is a blast. The product doesn't behave as per the requirements, and some of the important test cases fail. It was mostly about handling exceptions, but any discrepancy at this level is serious for every stakeholder. But then, that is the reality of life, and this can happen because it has a probability of happening. Certain test scenarios were reported as passed, but in reality that was not the case. The product immediately goes back to the development team, along with a report, for the necessary patchwork. New timelines come into place.
Code Testing has to be a fool-proof mechanism
As per plan, within the next three days the patchwork and code testing have to happen so that the product comes back to the same small group of end users for alpha testing. Everything is under control, though. At the same time, an internal team starts analyzing the cause of this failure. This is important for learning and for avoiding similar failures in future. The team submits a report to the QC and product managers with a recommendation to punish the tester who was responsible for this failure. Both managers send back the report with two suggestions.
First, bring to the table a set of recommendations on how to avoid such hiccups in future. Second, recommend a proper training mechanism to ensure QC engineers don't fail in future. Knowing which QC engineer made the mistake and taking action against him or her doesn't serve the purpose. The purpose is to equip them better in terms of training, knowledge, practice, and confidence to perform error-free tasks in future, so that code testing doesn't leave gaps like this.
Digital transformation is a trending phenomenon on the global spectrum. Every organization is performing online transactions and other activities in one way or the other. In fact, it has become one of the key points in every boardroom discussion. Every department in an enterprise is working toward some innovative or creative digital solution to develop and deploy, and the ones already in place keep getting reviewed for what better is possible. The internet and the lowest possible data rates are responsible for a tremendous increase in customer expectations of flawless digital experiences. So whether you are a customer, buyer, seller, employee, or employer, it has become your basic need without any doubt. Online transactions are increasing manifold on a daily basis: online bookings for travel, movies, shows, events, gas, couriers, etc., in addition to online payments, purchases, orders, and more. In fact, there is no end to it.
To cater to all these needs, every business is working hard. Obviously, for the development and deployment of any component of digital transformation, the IT department has to play a phenomenal role. Acquiring business knowledge has become a very important factor for them, because without in-depth knowledge of business processes, coding, testing, and deployment are impossible. User experience is also playing a major role. Any app that doesn't impress users in terms of quality, speed, navigation, and comfort is prone to being dumped in the dustbin. Even if the concept and purpose are good, such basic flaws make an app worthless. On top of it, this is an era of global competition.
Digital Transformation Is The New Sprint Track
The moment you have a new concept and you don't develop and launch it fast, somebody else will take the lead. And if you are able to launch it in time, then keep engaged in new releases, updates, and features. Because even if you launch a new concept first, that doesn't guarantee you stay ahead in the race. That is just the beginning of digital transformation.
Customer experience matters a lot in today's world of high competition, especially when it comes to entertainment. I had a bad experience recently at PVR Cinemas. PVR is a premium chain of theatres in India where you get a luxury cinema experience. No doubt, the costs of tickets and snacks inside are quite high, but then once in a while you don't mind visiting with family for good-quality entertainment. The service quality for snacks is good: you pay for the snacks and tell them your seat number and auditorium number at the time of ordering, and they ensure delivery to your seat the moment the order is ready. That gives you a royal experience, no doubt, and that is the reason why people prefer going there for fun, food, and entertainment. PVR is a multi-screen, multi-auditorium property, with locations all across the country.
We had good service at the ticket window. Getting tickets promptly and gracefully from the young staff adds value to the customer experience. Mine was auditorium 1, and I was there with my family. There was some time before the show, and entry to the auditorium was about to start, so we decided to order some snacks. I paid for three dishes and told them my seat number and auditorium number. So far the customer experience was quite good, but nobody was aware that a small act of carelessness from the delivery staff would spoil the whole experience. We entered the auditorium and took our seats, and soon the movie began. After around 30 minutes a delivery boy reached my seat and handed over the tray with our dishes. And that started the moments of trouble: there were only two tissue papers while we were three.
Customer Experience Matters a Lot
Obviously, one of us had to consume the dish without a tissue paper. It mattered because the dish was saucy, and having a tissue paper was necessary. That momentary small incident was bad enough to spoil my mood. It is probably a telling example of a bad customer experience at PVR Cinemas.
Mpesa is a mobile wallet service from Vodafone, one of the largest mobile operators in India. The service is similar to Paytm, Airtel Money, and many others. In simple words, you download the app on your smartphone, register yourself, and then top up your account through your bank account, debit card, or credit card, just like the other similar apps. Then you can pay your bills, transfer money, buy movie tickets, etc., with the help of this app; the other apps above do the same. Now, different apps have different offers. In this app from Vodafone, when you top up INR 1000 from your bank or card, you get INR 50 as a bonus, so your net balance becomes INR 1050. Similarly, there are different kinds of offers for different purchases, spends, etc.
Surprisingly, there is a strange condition in Mpesa that is not there in other services like Paytm and Airtel Money. Why Vodafone makes this customer-unfriendly gesture is difficult to understand. The condition is that if you don't do any transaction in your account for a certain period, Vodafone deducts INR 50 from your Mpesa account. I don't think any other similar digital or physical service has this condition. So either Vodafone is quite stringent in its policies or its customers don't mind it. But, in my opinion, this is my money, so why should I allow them to deduct any amount because I was inactive for some period? Even if I was inactive, I have money lying in my account. And even if there is no money lying in my account, how does it matter? Vodafone had a different opinion on this but was not very clear or convincing.
So, when I raised a query, somebody called me from the Vodafone office. When I inquired why there had been a deduction from my Mpesa account, I was told that I hadn't transacted for the last 6 months. When I asked which other similar service does this, there was no answer. And when I asked why they were doing it, they said it was because of a tie-up with ICICI Bank. But my point is: why is only Mpesa doing it when no other similar service provider does? Is it a customer-friendly activity? Are you losing customers because of this? If yes, then you need to relook at this policy. In fact, there are better ways to attract customers toward more transactions.
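The two rules described in this post can be modeled as a toy wallet: an INR 50 bonus on a top-up (assumed here to apply to top-ups of INR 1000 or more) and an INR 50 deduction after roughly six months of inactivity. This is only a sketch of the behaviour I experienced, not Vodafone's actual system, and the threshold and day count are assumptions.

```python
from datetime import date

TOPUP_BONUS = 50      # INR bonus on a top-up of INR 1000 or more (assumed threshold)
INACTIVITY_FEE = 50   # INR deducted after prolonged inactivity
INACTIVITY_DAYS = 183 # roughly six months (assumed)

class Wallet:
    def __init__(self):
        self.balance = 0
        self.last_activity = date.today()

    def top_up(self, amount, on=None):
        """Credit a top-up, adding the bonus for qualifying amounts."""
        self.balance += amount
        if amount >= 1000:
            self.balance += TOPUP_BONUS
        self.last_activity = on or date.today()

    def apply_inactivity_fee(self, today=None):
        """Deduct the fee if the account has been idle too long."""
        today = today or date.today()
        if (today - self.last_activity).days > INACTIVITY_DAYS:
            self.balance = max(0, self.balance - INACTIVITY_FEE)
            self.last_activity = today

w = Wallet()
w.top_up(1000, on=date(2018, 1, 1))
print(w.balance)                                # 1050
w.apply_inactivity_fee(today=date(2018, 8, 1))  # more than six months later
print(w.balance)                                # 1000
```

The bonus the customer earned is exactly cancelled by one inactivity deduction, which is what makes the policy feel like taking the customer's own money back.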
A PMO or Project Management Office is a great way to control projects in an organization. The capabilities and scope of this mini organization within an enterprise should not be limited to software projects only. In fact, every initiative that targets measurable results in terms of time, money, and improvement is a project. As a matter of fact, an initiative has no meaning if it has no targets; that would be a meaningless activity resulting in a waste of time, money, and effort. Such activities are the biggest wastes in an organization. So, logically, all measurable initiatives, even non-software ones, should come under the lens of the PMO, which thus controls projects efficiently. The major activities of this office include defining and controlling project management methodologies. All projects can't follow the same methodology; there are many factors that decide a particular project type.
These factors include processes, documents, approvals, and other mechanisms. Hence, similar kinds of projects can be put into a single project type, while different projects, depending on their nature and various factors, will land in different project types. Then, on the basis of the project type, we define a project methodology. Though all four stages of project management are important to follow, they are not necessarily followed in the same manner. For instance, a web project will be handled through a different process than an in-house development project, and similarly for other kinds of projects. Although the sequence of the four stages remains the same in all projects, the various steps within those stages will vary. The PMO is responsible for all of this.
Let Every Project Come Under PMO
In fact, all Six Sigma and Kaizen projects should also work under the PMO. In the same manner, all projects that HR, Marketing, Design, or other departments run must have the PMO as the controlling agency in the organization. For instance, acquiring an organizational certification like OHSAS or ISO 14000, deploying an ISMS, or moving from SAP to HANA are again projects that should come under the PMO. Are we ready to look at the PMO from a broader perspective?
An Interactive Voice Response or IVR system is a good tool to automate some business processes, especially operations. Recently I started feeling that most of these IVR systems are not intelligent and customer-centric. It might be that the configuration is not proper even though the product has ample capabilities; even then, it is a flaw at the business end that needs attention and correction. Let me take the example of one of the largest telecom operators in India, Airtel. I am a big fan of this company because I have been using their broadband for more than a decade with a consistently good level of satisfaction. In fact, social media has become a magical agent in creating stronger bonding between a customer and a business. Now, it all depends on how brave a business is in being present and responsive on social media platforms.
If I talk about Airtel, they have a phenomenal presence on Twitter, and they are quite responsive. At least, that has been my experience with them, not once or twice but multiple times and for different reasons. So, let me take a recent example. One fine morning I wake up and find that the internet is not working. Though the router is on, its data LED is not blinking, indicating that no data transmission is happening. I reboot the router in case there is an initialization issue, but the situation remains the same. I call their support number for technical support and the IVR system begins. Since it was around 7:30 am, their staff was probably not available to take the call, but the system kept telling me to hold on and that a technical person would connect shortly. I waited for 15 minutes but to no avail.
An IVR System can be made much smarter
Those 15 minutes were quite irritating, and they resulted in a tweet to the company. Immediately I get a call. Someone at the other end politely informs me that there is a breakdown in my locality and that by 1:30 pm the internet will start working. Since I had an alternative backup plan in place, it didn't affect me much. So I request the gentleman to note down a couple of suggestions regarding their IVR system. Firstly, when I call from my registered number, the system knows who I am and which locality I am calling from. Hence, it can intelligently tell me that there is a breakdown in my locality that will be rectified by such-and-such time.
Secondly, when there is no technical person available, the system should not put me on hold. Rather, there should be a mechanism to inform the customer and arrange a callback as soon as a technical person arrives.
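The two suggestions above can be sketched as a small routing function: look up the caller's registered locality, announce a known outage proactively, and queue a callback when no agent is free. All the numbers, localities, messages, and data structures here are hypothetical, purely to illustrate the logic a real IVR would implement against its CRM and outage systems.

```python
# Hypothetical lookup tables; a real IVR would query live systems.
subscribers = {"+91-9800000001": "Sector 15"}
outages = {"Sector 15": "A breakdown in your area is expected to be fixed by 1:30 pm."}
agents_available = False
callback_queue = []

def handle_call(caller_id):
    """Route an incoming call using caller ID instead of generic hold music."""
    locality = subscribers.get(caller_id)
    if locality and locality in outages:
        return outages[locality]          # suggestion 1: proactive outage message
    if not agents_available:
        callback_queue.append(caller_id)  # suggestion 2: callback, not endless hold
        return "All agents are busy; we will call you back shortly."
    return "Connecting you to a technical agent."

print(handle_call("+91-9800000001"))  # known subscriber in an outage area
print(handle_call("+91-9900000099"))  # unknown caller, no agents on seat
```

The point of the sketch is that both fixes are cheap: the system already has the caller ID, so a locality lookup plus a callback queue would have saved those 15 minutes on hold.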
Business automation has become an important factor in business continuity. As a matter of fact, you can't think of surviving the changing scenario of business on a global canvas if you are not thinking of automation. Mind it, automation has no end; it is not a single thread of a business. Think of automation across the whole entirety of the business. It is instructive to look at the extent of automation in many businesses that are doing excellently. For instance, think of businesses like Uber, Airbnb, or Ola that are completely automated. If you look at the complete cycle of customer operations, you find no manual intervention in the whole process. It is simple, and it is simply flawless: you don't need any manual assistance anywhere along the way. That is what I call a great philosophy of business.
As the technology head of an enterprise, if you treat every stakeholder as your customer, life becomes easier. Try giving them your best at all levels, and in turn expect them to engage you in real business activities so that you get the crux of it down to the deepest level. Then business automation becomes easier. In fact, the whole concept of automation is changing. So far, even if you were able to automate just one business thread or process, you would make the top brass happy. But actually, there is nothing to be happy about, because partial automation is always a painful proposition. It makes your business vulnerable and error-prone, because it needs more manual joints at both ends; by manual joints I mean it requires some manual intervention and feeding at both ends.
Business Automation has to be complete in all respects
Even most IVR systems, deployed in the name of business automation, are becoming useless these days. The reason is that they are neither intelligent nor customer-centric. How? I will explain in my next article with a real-life example. Stay tuned.
2018 will be the year of transformations, innovations, and evolutions. A lot of newer technologies will emerge, and at a faster pace. At the same time, existing things will evolve. Technology will not stay limited to a closed IT department. Technology 2018 is about to change the whole paradigm. As far as IT is concerned, it will expand its horizon across the whole enterprise, not in terms of deployments but in terms of participation. In fact, every department in the organization will become an active stakeholder in organizational technology.
As a matter of fact, suggestions from the CIO/CTO will take a backseat. It will be the other departments that suggest the technological changes they require, and in a more concrete and crisp manner. If this doesn’t happen in your organization, you are lagging behind in terms of technology and need to check the overall balance in the organization.
Every enterprise has to strike a balance between its IT department and the other departments. Technology 2018 is balance in terms of technological collaboration. If requirements come from the departments, IT needs to get the right vendor and technology in place to cater to that need. The CFO’s role can’t stay limited to the financial part of technological deployments and vendor selection; it has to go beyond that. Even the CEO has to be a frontrunner on the what and why of technology in his or her company. Things will move at a very fast pace. Gone are the times when you could deploy a technology and relax for the next three to five years. In the current scenario, it will not be a surprise if you have to call off a deployment midway because a lot has changed between the finalization of the product and its procurement.
Technology 2018 Will Bring a Sea Change
The rule of the game: long deployments will be the big losers, while shorter, modular deployments will be the ultimate winners. That is the Technology 2018 mantra.
If you feel that a successful implementation, handover, and sign-off by the customer leads to project closure, then you are absolutely wrong. It might appear right on paper or in project management terminology, but not in terms of business success. If you really want to measure the success of a project, it must achieve the business goals in absolute terms post-deployment. Only a successful business case can help you get further business from the same customer or from other customers. Remember that your existing customers play a major role in spreading the word about your product and service, about its success as well as its failure. And that matters a lot: one research paper puts the impact of such word of mouth at almost 50% of your business, since an organization always looks for feedback from an existing customer or product user prior to a new purchase.
There are many examples where the seller company declares a project a successful implementation while the buyer company is yet to use the product to its fullest. As a matter of fact, the product or solution is sometimes as good as not in place, either because it doesn’t fit well in the environment or because it is too complex to use. One more important point in this regard is post-project support. When you are supporting a product after its successful deployment, you need to ensure 100% upkeep, even when usage hiccups happen not because of any fault in the product but because of the users. It is critical to remove users’ hesitance in that regard so that they use the product fearlessly. And if any faults do still exist in the product, take them up as soon as possible and make it flawless.
Successful Implementation Needs Post-Project Attention
Even if a product is strong and has a history of successful implementations, it needs attention. A lapse in post-project support can invite big trouble.
There are many business elements that decide the fate of a business. With the changing dynamics of business on the global front, it is important to understand which top business elements, if set right, will ensure success in business. As a matter of fact, how you measure the success of your business also matters. In my opinion, consistent growth, low employee turnover, no loss of existing customers, and successful completion of running projects are some of the important factors. It also depends on how well you leverage technology in your business: if you are doing a lot of work manually that could be automated, you are merely wasting resources, time, and money. Don’t hesitate to make good investments in technology; they always turn out to be gainers in the long run. The mantra is: assess wisely and deploy fast.
The world is moving to the cloud, and it helps in many ways. Firstly, it reduces your opex and capex burden. Secondly, it reduces your maintenance and upkeep efforts. Thirdly, it provides you the latest and best technology at a fraction of the investment. And finally, it helps you concentrate more on core pain areas, making you a little more proactive in your approach. Now, it is important to understand what you need in order to achieve this. Firstly, you need to have the right team in place. What you have and what you need might not be in alignment; don’t throw your people out, but make them capable of closing that gap. Help them in every possible way: train them and equip them with the right set of skills and vision. The second business element on top priority is business data. Ensure it is systematized and automated as much as possible.
In addition, to reduce paperwork and ambiguities, capture data at its point of origin, straight into the system. Finally, one of the most important business elements is security. Whatever you build, launch, create, test, deploy, or sell, security has to be the top feature.