Global digital infrastructure trends prominently include blockchain technology, with a key focus on energy consumption and performance. Enterprise networking, meanwhile, is taking a new shift with the evolution of intent-based networking, a step ahead of software-defined networking. Cloud is impacting enterprises in a big way. Whether to go for it, adopt hyperconvergence, or stay with what you have are the mind-boggling questions on which enterprises must take a firm decision. Obviously, any decision taken in this regard will have a long-term impact on the overall growth of the organization. Blockchain is currently in a nascent state, but it has already become a worry point for datacenter operators and vendors. IT professionals and datacenter operators can’t ignore the impact of blockchain technology on datacenters. It is interesting to see how this technology addresses the two important factors: energy and performance.
In fact, it is high time to leverage blockchain technology by deploying it and extracting real value out of it. An important issue to analyze is its sustainability, and of course its scalability. Power consumption is directly proportional to the size of the network, but to control the waste there is the consensus protocol: a set of distributed protocols that intelligently decide which transactions to execute and which not to, without compromising the consistency and integrity of the blockchain, irrespective of the number of distributed nodes. A related protocol is proof of work, or PoW, which many blockchains like Ethereum and Bitcoin already use. This protocol is intensely compute- and energy-centric. It is these properties of PoW that make cryptocurrency mining a natural fit for datacenter operations.
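To make the compute-and-energy point concrete, here is a minimal proof-of-work sketch in Python. The block data and difficulty value are made up for illustration and do not reflect any real chain; the sketch only shows why PoW burns energy: miners must brute-force a nonce until the hash meets a target.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce until the SHA-256 hash of the block
    starts with `difficulty` leading zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1  # every failed attempt is compute (and energy) spent

nonce, digest = mine("tx1;tx2;tx3", difficulty=4)
print(nonce, digest)
```

Each extra leading zero of difficulty multiplies the expected number of attempts by 16, which is exactly the energy knob the paragraph above refers to.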
Blockchain technology will take some time to acquire maturity
A recent study about blockchain technology states that more than 55% of Bitcoin nodes are in datacenters. The study was conducted by IC3, Cornell University, and Technion. It also reveals that around 30% of Ethereum nodes are part of datacenters.
IT in enterprises always oscillates between keeping the lights on and finding new sources of value. When the focus stays mostly on keeping the lights on, not many innovations are happening; the environment is more about firefighting and handholding than introducing and deploying new technologies. Amid all this, the digital economy and digital transformation are taking shape in every organization. After all, growth and development are more important than merely managing day-to-day operations. Keeping the lights on is important to ensure smooth operations, but it should not happen at the cost of high-level resources being wasted on mundane jobs. The IT budget of any organization can clearly indicate the overall health of technology in that organization. No focus on R&D, new technologies, and innovation clearly indicates a lack of technological growth in the organization.
Transformation should always stay a leap ahead of maintenance. If there is a need for automation of key business processes, it should not stall for lack of budget. Mobility is another area that needs attention. There is no point in letting your technical debt grow on a regular basis; rather, the focus should be on decreasing it as much as possible. If legacy systems are consuming more and delivering less in comparison, it is high time to discard them. It is not wise to spend on 10 legacy applications performing different business tasks piecemeal while consuming a large chunk of resources, time, effort, and money. The overall equation stays negative in that case, raising an alarm for the health of the organization. The same is true for legacy infrastructure; after all, its upkeep means recurring investment.
Top IT Priorities for 2018 Have Security on Top
Other top IT priorities include hyperconvergence, big data, analytics, artificial intelligence, machine learning, etc. Obviously, one size doesn’t fit all. The priorities will shuffle according to the nature of the business and other key parameters.
On-premise datacenters differ from a public cloud in a big way. While the on-premise infrastructure environment is usually complex, with slow hardware provisioning, the public cloud is simple, with rapid hardware provisioning. That is where Hyperconverged Infrastructure, or HCI, comes into the picture, because the efficiency that one model brings is difficult to attain with the other. Many enterprises that understand this gap between the two models are trying to speed up the IT transformation of their on-premise datacenters with the help of automation. Implementing these automated management platforms definitely requires a different kind of infrastructure setup that supports them well. This, in fact, creates an environment similar to a cloud model.
The purpose of Hyperconverged Infrastructure is to create an on-premise IT environment that is as good as a public cloud in terms of freedom, agility, and speed. The overall IT transformation process needs not only a different set of tools but also a large number of automation initiatives. An important factor to take care of is policy-driven automation, which should be the ultimate goal of the whole exercise. The best way is to set HCI as the base that drives environment-wide automation. If you look closely, automation has already become a regular phenomenon in many organizations’ IT environments. In fact, discovery and orchestration tools are becoming an integral part of datacenter management systems. These tools function on policy-based resource allocation and management. That is why automated management tools like Puppet, Chef, SaltStack, and Ansible are becoming popular.
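Tools like Puppet, Chef, SaltStack, and Ansible all share one core idea: you declare desired state as policy, and the tool idempotently reconciles actual state toward it. Here is a tool-agnostic sketch of that loop in Python; the resource names and state values are invented purely for illustration, not taken from any of those products.

```python
# Desired state declared as policy (the "what", not the "how"):
desired = {
    "ntp":    {"installed": True,  "running": True},
    "telnet": {"installed": False, "running": False},
}

# Actual state as discovered on a node (illustrative values):
actual = {
    "ntp":    {"installed": True, "running": False},
    "telnet": {"installed": True, "running": True},
}

def reconcile(desired, actual):
    """Return the minimal list of actions needed to converge actual
    state onto the declared policy. Idempotent: running it again
    after convergence yields no actions."""
    actions = []
    for resource, want in desired.items():
        have = actual.get(resource, {})
        for prop, value in want.items():
            if have.get(prop) != value:
                actions.append((resource, prop, value))
    return actions

print(reconcile(desired, actual))
# → [('ntp', 'running', True), ('telnet', 'installed', False), ('telnet', 'running', False)]
```

The key design point, shared by all the tools named above, is that re-running the reconciliation against an already-converged node produces an empty action list, which is what makes policy-driven automation safe to schedule continuously.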
Hyperconverged Infrastructure needs a strong IT transformation strategy
While having automation tools is the prime requirement for Hyperconverged Infrastructure, there has to be a suitable IT transformation strategy in place before anything else, because the overall goal is to bring in a simple and accelerated environment.
As per the latest IAMAI-Deloitte report, released today during the ‘IoT for Smart India’ summit held in New Delhi, two industry segments will witness maximum utilization of Industrial IoT (Internet of Things). The title of the report is ‘Demystifying IoT for Digital Transformation’. Another key finding of the report is that Industrial IoT will surpass the consumer IoT spectrum in India by 2020. A significant target set by the Department of Electronics and Information Technology (DeitY) in its draft IoT policy is to take the IoT industry in India to $15 bn by the year 2020. If India achieves it, it would hold around a 6% share of the global IoT industry. IAMAI stands for Internet and Mobile Association of India. There is significant adoption of IoT in India because of new initiatives like Digital India and the Smart Cities Mission.
Industries across all segments are doing their best to leverage technology to the maximum for growth. That, in turn, shows in a steep rise in the adoption of Industrial IoT projects. Most of these projects are taking place in the Energy and Utilities, Transport, Logistics, Industrial Manufacturing, and Agriculture verticals. One major drawback is that these projects are taking place in isolation, thereby increasing the risk of duplicated effort and wasted money. Universal collaboration among the vendors putting scattered efforts into these projects would not only reduce these risks tremendously but also help in gaining greater success and faster results.
Industrial IoT needs a high level of collaboration among vendors and projects
Bikram Bedi, Head of India and SAARC, Amazon Internet Services says,
“Internet of Things (IoT) as a technology is receiving tremendous attention for the transformative potential it presents. By connecting the physical and digital worlds, IoT vastly expands the reach of information technology and throws up a myriad of possibilities given the ability to monitor and control things in the physical world electronically, and the availability of previously inaccessible data. IoT applications are being deployed across a wide range of use cases including utilities, transportation, agriculture, healthcare, manufacturing, retail, connected vehicles, connected homes and many more. Given the transformative potential and the significant economic impact IoT can drive for a country like India, IAMAI together with the industry, has launched a concerted effort towards catalyzing the IoT ecosystem in the country.”
Harmeen Mehta, Co-Chair, IoT Committee, IAMAI and Global CIO, Bharti Airtel adds,
“IoT is all set to truly transform and enrich our lives with its digital solutions and innumerous possibilities. India is well positioned to leverage the power of IoT to create massive growth opportunities in the country. At IAMAI, we are fully committed to contributing to this journey and are working closely with the Govt and relevant industry stakeholders to build a vibrant ecosystem that demystifies IoT and works towards developing policy, standards & best practices for IoT connectivity, device protocols, security, mass scale production and cost-effectiveness.”
Digital transformation and skill shortages go hand in hand. Digital transformation involves a high engagement of digital technology, and IT needs to be an innovative partner in adopting and streamlining these technologies for the organization. Innovations must impact all business verticals with the overall goal of enhancing the customer experience. To improve customer experience, you need to improve overall operational efficiency, and for that it is important to learn how to find the inefficiencies in the complete ecosystem. Once you achieve that, it will definitely help in improving business agility, and the overall result is better management of business risk. That, along with keeping the organization ahead of the competition, is the core purpose of digital transformation. This demands a number of mindful strategic alliances, both internal and external; it is not wise to keep all expertise in-house.
Progressive organizations face more heat when it comes to perfect staffing; for them, digital transformation and skill shortages are a common phenomenon. On the other hand, progressive businesses have the skill to learn and unlearn things faster than others. For instance, cloud adoption across an enterprise calls for a number of disruptions, and staffing is one of the biggest challenges. It happens because, with the changing scenarios, the organization needs new skillsets. Retraining is one option, but that is a time-consuming exercise. Usually, these kinds of projects happen on a fast track: organizations want to see immediate results. But they fail to understand that any transformation comes along with a certain set of pain points. The speed of deployment is directly proportional to the appetite to bear the pain. Contrarily, organizations not serious about digital transformation don’t face such an IT skill crunch.
Digital transformation and skill shortages create a lot of opportunities
Organizations with a strategic mindset know about the pain of digital transformation and skill shortages. They, in fact, try to find innovative ways to manage the situation before the pain turns severe.
Even today, the majority of organizations’ IT departments are busy handling day-to-day operations and upkeep tasks. On the other hand, enterprises are craving digitalization and thus seeking more strategic alliances with their IT cells. So basically, the demand on IT is changing from tactical to strategic, and this is becoming a necessity for the IT of any enterprise. As a matter of fact, enterprises are changing their strategy on the kind of people to hire in IT. Earlier it was more generalists and fewer specialists. Now, the new paradigm is to hire more specialists and outsource the generic kind of jobs that require generalists. This, in turn, is helping organizations shift from a capex model to an opex model, creating a more flexible and adaptable environment and reducing the risk of getting left behind.
Strategic Alliances Are the Key to Digital Transformation
Usually, larger and older organizations carry bigger technical debt. This technical debt is not only in terms of hardware and software but manpower too. A huge pool of legacy systems, which I discussed in detail in my previous post, doesn’t let them integrate easily with their modern systems. Organizations are understanding the need for greater strategic alliances, not only with IT but with all other departments that are key stakeholders in carrying out the digital transformation. Happily or painfully, organizations have to adopt this methodology. As I often say, the first and foremost task is to transform from a tactical to a strategic alliance model. Scrutiny of legacy software and infrastructure is very important to sort out the level of burden they impose on the overall mechanism. If businesses have to survive in the current scenario, digital transformation is a must.
No digital transformation is possible without forming strategic alliances with IT. For that, IT needs to be an equal strategic partner in the business.
Working in an enterprise as CIO/CTO, you can never avoid buying a software application for one purpose or another. On the same grounds, a lot of in-house coding keeps happening through the coders and developers onboard. It is quite interesting to know how unknowingly and swiftly many of these become technical debt for an enterprise. Let us see how. Over a period of time, these applications become a pool of legacy for the organization. Many of them go out of use or are left in partial use. Despite all that, the upkeep of this whole bouquet of code remains the responsibility of the organization’s IT department. What it means is that an invisible elephant appears, eating a big chunk of energy and effort without delivering any significant return. There are many reasons for this technical debt piling up silently and effortlessly.
Let me give you an example. Around 23 years back, I created a payroll application in one of my early organizations. It had a lot of complications, and obviously it took a lot of time to develop and establish. After working there for 10 years, I joined another organization and then another. While working in this third organization, I got a request from my first organization: there was some issue in the code of the application I had developed, and they needed my help to fix it. By this time the application was almost a decade old but still in use. In fact, during this period, the home-grown ERP that we had during my tenure had been replaced with an international-brand ERP. But for payroll, they were still banking on the same application. That was quite surprising, even for me.
Technical Debt Appears Silently and Without Any Alarm
On enquiring, the CFO told me that the mainstream international-brand ERP was not able to cater to their payroll requirements. The cost of developing the whole thing again in-house was not feasible for them, and getting a new piece of code from a vendor would incur a huge cost. That is why they decided to carry on with the same age-old legacy payroll application. In my opinion, it was now a sleeping volcano that could erupt at any moment, creating a huge technical debt for them.
If enterprises could find a suitable, stable, and reliable monitoring tool for their production applications, they would not mind shifting to serverless architecture at a faster pace. Adoption would become easier and quicker in that case. The biggest monitoring challenge in serverless environments is visibility. There are a number of vendors offering serverless monitoring services and capabilities, including SignalFX, Datadog, New Relic, etc. It was, in fact, AWS Lambda that created the concept of serverless architecture. The concept was new but quite interesting, offering function as a service (FaaS). As a matter of fact, serverless means the organization doesn’t need to provision servers. That doesn’t mean servers are not in the picture; they are. But the organization doesn’t need to manage them. This is quite interesting, isn’t it? Then who handles server management? Who ensures scaling at the right juncture?
Serverless architecture involves a metering mechanism that charges users on the basis of certain parameters, such as the duration of code execution and the number of times a function is triggered. That makes serverless monitoring quite interesting. Is it costly? Let’s see. Many organizations are already moving from onsite datacenters to serverless architecture, which saves them from worrying about containers or even virtual machines. While AWS was the pioneer in creating serverless technology, there are other players now, like Google Cloud Platform and Microsoft Azure. The serverless model comes with certain benefits: an improvement in code quality, improved developer productivity, cost savings, and scalability, to name a few. One of the biggest complaints from this technology’s users is the lack of visibility into their servers. A serverless environment demands a different monitoring mechanism; normal APM (application performance monitoring) and IM (infrastructure monitoring) systems don’t serve the purpose.
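A back-of-the-envelope model of that metering makes the pricing logic clearer. Real providers bill per request and per GB-second of execution; the rate constants below are illustrative placeholders, not actual AWS, Azure, or GCP prices.

```python
def monthly_cost(invocations: int, avg_ms: float, memory_gb: float,
                 per_million_requests: float = 0.20,
                 per_gb_second: float = 0.0000167) -> float:
    """Estimate a FaaS bill: you pay per trigger and per GB-second
    of execution time, never for idle servers."""
    request_cost = invocations / 1_000_000 * per_million_requests
    gb_seconds = invocations * (avg_ms / 1000.0) * memory_gb
    compute_cost = gb_seconds * per_gb_second
    return request_cost + compute_cost

# 5M invocations a month, 120 ms average duration, 512 MB of memory:
print(round(monthly_cost(5_000_000, 120, 0.5), 2))  # → 6.01
```

Notice that both billing dimensions scale with usage and drop to zero when nothing runs, which is the economic difference from a provisioned server and also why monitoring execution time directly matters to the bill.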
Serverless Monitoring Doesn’t Gel With Traditional Systems
In a nutshell, we can say that serverless computing is currently at a nascent stage, with vendors experimenting in various ways to attain a substantial reduction in overhead. Some more startups in this field are IOpipe in Seattle, Dashbird in Estonia/San Francisco, OpenGenie (Thundra) in Falls Church, Epsagon in Tel Aviv, and Stackery in Portland. In fact, it will be interesting to watch the next moves of biggies like Amazon in the field of serverless monitoring.
What are the enterprise datacenter preferences worldwide? Is the public cloud gaining momentum among enterprises? Well, a number of studies and statistics say the shift is happening, but at a slower pace. While the centralized datacenter and core business apps still remain within organizational boundaries, local datacenters are shrinking. One of the key reasons for this could be a shift towards hosted solutions. Obviously, embracing the public cloud cuts down your investments, especially capital investments. Organizations are moving to public cloud platforms rather than investing in IT infrastructure. As a matter of fact, any addition to existing infrastructure not only eats a major chunk of your annual budget but also increases your recurring expenses for upkeep and maintenance. Organizations prefer to reduce these costs and enhance their operational performance. Of course, the amount of effort largely depends on certain factors.
For larger organizations, migrating to the public cloud is more challenging; smaller organizations can migrate their workloads easily. Basically, it depends on the volume of data and the complexity of databases and applications: simple applications and databases are easier to move. Despite such hurdles, it is interesting to see the public cloud gaining momentum among larger enterprises. It is worthwhile for organizations to study how this shift impacts their IT environment usage and workload. Existing IT infrastructure and assets become a worry point for organizations when taking the call to move to the public cloud. The increase in cloud service providers clearly indicates the mood and the trend. In many small organizations, in fact, server rooms and local datacenters have vanished. Noticing this trend and its success, even larger organizations are now thinking of moving more workloads to the public cloud.
Public Cloud Gaining Momentum But Slowly
Traditional on-premise deployments are decreasing as the public cloud gains momentum. Colocation is also becoming a favorite choice, especially for mid-level organizations; that way they are able to consolidate their infrastructure and datacenters. Overall, in-house IT footprints are decreasing across organizations of all sizes and geographies.
This post is in continuation of my previous two posts. The agenda of these posts is to highlight how enterprises can leverage machine learning in various segments, thus enhancing their business decisions. In my first post, we discussed How Machine Learning Transforms Customer Experience in CRM. Similarly, in the next post, we talked about How To Use Machine Learning In Supply Chain Analytics. In this post, we will discuss a few more important use cases that are applicable in most enterprises. So, let us start with a few more machine learning use cases for enterprises. The next use case that comes to my mind is data analytics. In fact, I think it is the first use case that originated as soon as machine learning came into existence. The good point here is that machine learning can easily handle unstructured data, thus making analytics more meaningful with wider coverage of relevant data.
When we talk about machine learning use cases for enterprises, it is analytics that becomes the foremost priority. The reason is coverage of wider datasets and the capability to build predictive models while embracing unstructured data. It can, in fact, result in prescriptive analytics. The real beauty is letting it be used by those who are not data scientists. Thus the real power comes into the hands of business people who need to take business-critical decisions in time. The next use case we can discuss here is HCM, i.e., Human Capital Management. Machine learning is already impacting, or rather empowering, HR specialists with recruitment, development, training, growth, measurement, and retention of employees. There has been, in fact, a radical shift in recruitment in terms of the way job-finding sites function, as well as in how recruiters and organizations identify the most suitable candidates.
Machine Learning Use Cases for Enterprises Are Helping Them In A Big Way
The next class of machine learning use cases for enterprises comprises information security. Through the application of analytics, machine learning is enhancing information security across various tasks: detection, alerting, correction, and so on. With the increasing number of end users, especially in large organizations, it is impossible for the IT department to check even security logs manually. That is where this technology becomes handy. Machine learning helps a lot in understanding user behavior, identifying risks and vulnerabilities, mitigating risks without manual intervention, and proactively taking appropriate action against external threats.
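As a toy illustration of the user-behavior angle, here is a statistical baseline check using only the standard library. Real security products use far richer features and trained models; the user names, counts, and z-score threshold here are invented for the sketch.

```python
from statistics import mean, stdev

def flag_anomalies(daily_logins: dict[str, list[int]], z_threshold: float = 3.0):
    """Flag users whose latest daily login count deviates more than
    z_threshold standard deviations from their own history."""
    flagged = []
    for user, counts in daily_logins.items():
        history, latest = counts[:-1], counts[-1]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(latest - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

logins = {
    "alice": [10, 12, 11, 9, 10, 11, 10],  # steady behavior
    "bob":   [5, 6, 5, 4, 6, 5, 90],       # sudden spike: worth an alert
}
print(flag_anomalies(logins))  # → ['bob']
```

The point of the sketch is scale: a rule like this runs across millions of log lines without anyone reading them, which is exactly the manual-review burden the paragraph describes.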
When we think about using machine learning in supply chain analytics, a lot of ideas come to mind. There is an important thing to keep in mind before heading towards any conclusion in this regard. As we all know, some components of supply chain management still talk in analog terms; there are certain things that you would still be performing with pens and clipboards. But there is a brighter side to it: the other parts, like autonomous trucks, drones, analytics, and driverless cars, are using the latest technology. Let us keep our focus on the analytics part for now. Some of the prominent challenges in supply chain management are common to the B2C and B2B segments. Like same-day delivery, be it a service or a product. The world is becoming more and more demanding day by day, and the boundaries between B2B and B2C are fading.
Every business and customer expects 24×7 availability, which means businesses need to be ‘always on’. Customers prefer personalized information, and expectations are already on the verge of on-demand and real-time. Amazing outcomes are possible with machine learning in supply chain analytics. It works especially well for organizations having trouble handling scale, like managing a huge number of stocking locations. Forecasting is another area where businesses require a high level of accuracy, with information coming in real time. Gone are the times of weekly or monthly forecasts; it is now daily or intra-day forecasting that is in demand, given the changing business scenarios. To keep getting the data in real time, point-of-sale (POS) systems need to be in place and integrated well with the centralized system. Security is a big concern too.
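The shift from weekly batches to intra-day forecasting can be shown with the simplest possible model: single exponential smoothing over a streaming POS feed. Real demand forecasting adds seasonality, promotions, and external signals; the hourly numbers and the smoothing factor below are purely illustrative.

```python
def smooth_forecast(sales: list[float], alpha: float = 0.5) -> float:
    """Single exponential smoothing: each new POS reading updates the
    forecast immediately, so the prediction refreshes intra-day
    instead of waiting for a weekly batch run."""
    forecast = sales[0]
    for observed in sales[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

# Hourly units sold for one SKU streaming in from POS terminals:
hourly = [20, 22, 21, 30, 34, 33]
print(round(smooth_forecast(hourly), 2))
```

Because the forecast is updated one observation at a time, it tracks the mid-day demand jump in the sample data within a couple of readings, which is the responsiveness the paragraph argues for.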
Machine Learning in Supply Chain Analytics Can Create Wonders
There needs to be a mechanism for identifying and alerting against fraud and theft. Another important area is anticipating unusual events in advance, which calls for integrating with big data sources like weather systems. All this is not possible without cashing in on the benefits of the latest technologies: you need to use machine learning in supply chain analytics to achieve all these goals. One of the best business uses in this regard is UPS’s ORION (On-Road Integrated Optimization and Navigation) system. The system has been working well for more than a decade now; it helps UPS drivers find the best possible route with the aid of GPS systems. There are many other things that only machine learning can handle, like dynamic pricing, online customer handling on social media, fraud detection, and defect detection. That is not all; these are just a few of the pointers.
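ORION’s actual algorithms are proprietary, but the underlying “best possible route” problem is classical shortest-path search. A minimal Dijkstra sketch over a made-up road graph (the stop names and distances are invented for illustration, and real route optimizers layer many constraints on top of this):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total_distance, path)."""
    queue = [(0, start, [start])]  # (distance so far, node, path taken)
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Distances (km) between hypothetical delivery stops:
roads = {
    "depot": {"A": 4, "B": 2},
    "A": {"C": 5},
    "B": {"A": 1, "C": 8},
    "C": {},
}
print(shortest_route(roads, "depot", "C"))  # → (8, ['depot', 'B', 'A', 'C'])
```

Even in this tiny graph the cheapest route is not the one with the fewest hops, which is why drivers benefit from an optimizer rather than intuition alone.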
In today’s environment, to take complete leverage of machine learning in supply chain analytics, it is important to use technologies and tools like image recognition, social media analytics, and video analytics, and to integrate with relevant information aggregators. Ultimately the goal is to gain an advantage with the help of technology and take your business to the next level of competition.
Every success in any business has only one factor behind it: customer experience. Let us see how it plays out in CRM. The foremost goal of any CRM is to provide a 360-degree view of a customer and thus create a great customer experience in CRM. That is the one factor the whole CRM vertical is striving for, whether on the customer side or the vendor side. The 360-degree customer view was lacking until the early 2000s. It was not possible, or even thought of, because the primary focus at that time was on transactional data: data residing in various formats in databases.
At that time, analysis touched only the structured data. All unstructured data was being ignored or treated as useless. Yet this unstructured data held most of the valuable customer interactions, like communications, phone calls, emails, and social media posts (though social media posts were far fewer at that time). It amounted to, more or less, a partial analysis of customer experience in CRM.
Discarding all this data could result in only a partial customer experience in CRM, because such data was not analyzed at all, owing to the drawbacks of the technology in use at that time. Machine learning is now able to give a major thrust to the 360-degree customer view because of its ability to analyze huge volumes of disparate data coming from various sources; in fact, it does not matter whether the data is structured or not. With the evolution of experience, experts have been able to define four prominent stages in customer analytics: Acquire, Serve, Nurture, and Grow. Let’s see how machine learning plays a major role in each of these stages. In the ‘Acquire’ stage, machine learning-based use cases would include micro-segmentation of prospects, thereby improving the level of accuracy.
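Micro-segmentation in the ‘Acquire’ sense can be sketched with a classic RFM (recency, frequency, monetary) scoring pass. The segment labels and bucket thresholds below are invented for illustration; a real pipeline would learn segments from data with clustering rather than hard-coding rules.

```python
def rfm_segment(recency_days: int, frequency: int, monetary: float) -> str:
    """Assign a coarse customer micro-segment from RFM values
    (thresholds are illustrative, not industry standards)."""
    r = recency_days <= 30      # active recently?
    f = frequency >= 10         # buys often?
    m = monetary >= 1000.0      # spends a lot?
    if r and f and m:
        return "champion"
    if r and (f or m):
        return "loyal"
    if not r and (f or m):
        return "at-risk high value"
    return "hibernating"

print(rfm_segment(12, 15, 2400.0))  # → champion
print(rfm_segment(90, 14, 3000.0))  # → at-risk high value
```

Each segment then gets its own acquisition or retention treatment, which is where the accuracy gain the paragraph mentions comes from: offers are tuned per micro-segment instead of being broadcast to everyone.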
Customer Experience in CRM Is A Constant Evolving Journey
Similarly, during the ‘Serve’ stage, machine learning has the ability to create an intelligent chatbot or virtual assistant for customer self-service. That itself simplifies many complex processes and makes things simpler for customers, who in fact find a lot of value in it. Optimizing average hold time, handling standard requests without involving a customer service representative, and delivering faster are some of the gains for the customer. Machine learning helps a lot in the ‘Nurture’ stage, transforming customer experience in CRM in a big way: it manages the process of customer interactions in such a way that the annoyance factor goes down and the satisfaction level goes up. As a matter of fact, it removes the customer friction points. Finally, in the ‘Grow’ stage, machine learning optimizes and customizes by providing the best suitable offers, enhancing conversion rate and profitability.
Nearly 50% of businesses currently use IaaS (Infrastructure as a Service), and around 12-15% growth is estimated for the next 12 months. Doesn’t that clearly indicate that a majority of businesses will have moved to public clouds in the next year? Do enterprises fear the public cloud? Or do they have a valid reason and a high degree of clarity in resisting it? Are they really resisting it? Cloud services include SaaS, IaaS, and the private cloud. Most organizations are using the cloud in one form or another, at least for one business application catering to at least one critical function. Also, a study says the majority of investment in hosted infrastructure is going to public clouds (IaaS). Still, it is far from wide acceptance. What could be the reason? There is a hidden war going on between vendors and enterprises.
There are, in fact, different kinds of scenarios. More than 40% of organizations are not using the public cloud, nor do they intend to in the near future. On the other hand, there are organizations that intend to adopt it but are slow to adapt to the cloud. The third segment is organizations that rely completely on the private cloud, which could be hosted or in-house. Cost and security are the two key concerns that businesses have in mind when it comes to adoption of public clouds. It also demands a huge transformation, for which it seems they are not mentally ready; this could be down to attributes of their organizations or their IT setups. Still, some prominent patterns have emerged. For instance, there is a straight connection between the size of an organization and its IaaS adaptability.
Public Cloud adoption is far from expectations
On the other hand, this trait takes a reverse sweep when it comes to the age of the business. Companies under five years of existence have the highest rates of IaaS adoption, while older businesses are late and slow adopters. All of this could lead to public cloud resistance from various perspectives.
Smart visualization is not the only way AI is transforming BI in a big way; there are other ways too. Before coming to those, let us discuss smart visualization a little more. As we understand from my previous post, it helps in eliminating the gap between experts and non-experts. That means it helps in getting better business results by actually involving and engaging business experts who are not too tech-savvy and don’t use any query languages. They therefore don’t need tech assistants in boardrooms and other top-level meetings to run the critical analytics that help them make crucial business decisions in time. Smart visualization, in fact, lets machine learning suggest the right graphic for the right query, making AI much easier to use. Another important tool is embedding AI into data storytelling, which happens with the help of NLG technology.
AI is able to make data storytelling a powerful tool through the integration of NLG (Natural Language Generation) technology. NLG makes storytelling more narrative-driven: it tells stories by employing data while preparing visualizations and business dashboards. NLG generates words and sentences from data using NLP, and it is quite interesting to understand how that happens. A number of recent business case studies integrate NLG into dashboards to enhance data storytelling. This provides critical business insights that are not easily understandable as numbers and graphics alone. Sentences in natural language make the output more meaningful by providing additional context and understanding. That altogether gives a different meaning to the visualizations, reports, and metrics in dashboards. The whole purpose is to make them easier to comprehend.
Data Storytelling Is Evolving At A Faster Pace
That is one of the reasons for the fast growth of NLG integration into dashboards. This integration not only makes data storytelling easier but also makes it easier to query the data, with the help of NLP algorithms, in order to tell the story in a comprehensive and impressive manner.
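At its simplest, NLG over a dashboard metric can be sketched as template filling. The function below is a minimal, illustrative stand-in (the metric names and message format are made up); commercial NLG engines of the kind described here are far more sophisticated, choosing narratives and context dynamically.

```python
# A minimal, illustrative sketch of template-based natural language
# generation (NLG) for a dashboard metric. Real NLG products generate far
# richer narratives; the function and message format here are hypothetical.

def narrate_metric(name, current, previous, unit=""):
    """Turn a metric's current/previous values into a short narrative."""
    change = current - previous
    pct = (change / previous * 100) if previous else 0.0
    if change == 0:
        return f"{name} held steady at {current}{unit}."
    direction = "rose" if change > 0 else "fell"
    return (f"{name} {direction} by {abs(pct):.1f}% "
            f"to {current}{unit}, from {previous}{unit}.")

print(narrate_metric("Quarterly revenue", 120, 100, "M"))
```

A dashboard could run a sentence like this next to the chart it describes, which is exactly the extra context the numbers and graphics alone do not give.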
AI is transforming business intelligence in a big way. The era of pilots and POCs is over; it is action time now, and things are happening in production. As we all know, Artificial Intelligence (AI) is a combination of a number of technologies like Machine Learning (ML), Natural Language Processing (NLP), and Natural Language Generation (NLG). This integration of AI with BI is creating wonders: it makes analytics crisper and more user-friendly. That will make BI accessible to the masses (the non-experts), thus speeding up its adoption. Organizations are becoming data-driven, which helps greatly in decision-making. The real catch is to make BI so friendly for business users who are not data analytics or BI professionals that they get real benefit out of it.
There is something called Smart Visualization: visualizing data smartly. It works with the help of Visual Query Technology, which supports visual analysis by graphically answering BI queries in charts, graphs, and other visual forms. That, in turn, makes analytics faster to perform and easier to operate. It removes a hurdle to adoption by removing the requirement of writing queries in code. That requirement was one of the reasons that BI, despite being a powerful tool, was out of reach of users with no SQL or other programming skills. Introducing machine learning into smart visualization is emerging as a smart move to apply AI to BI, thus removing the wide knowledge gap between experts and non-experts. This is how AI is transforming business intelligence in a big way.
AI is Transforming Business Intelligence through Smart Visualization
Smart Visualization is enabling users to create powerful dashboards with impressive infographics. These users need not necessarily have deep data analytics skills. That is a wonderful way of AI transforming business intelligence and making it accessible easily and effectively.
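The core idea of suggesting the right graphic for a query can be sketched with a few rules over column types. This is a rule-based stand-in of my own, not how any particular product works; real smart visualization tools learn these mappings with machine learning rather than hard-coding them.

```python
# Illustrative rule-based chart suggester: a crude stand-in for the
# ML-driven chart selection that smart visualization tools perform.
# The column-type vocabulary ("time", "category", "number") is assumed.

def suggest_chart(x_type, y_type):
    """Pick a chart type from the types of the two queried columns."""
    if x_type == "time" and y_type == "number":
        return "line chart"        # trends over time
    if x_type == "category" and y_type == "number":
        return "bar chart"         # comparisons across groups
    if x_type == "number" and y_type == "number":
        return "scatter plot"      # relationships between measures
    return "table"                 # fall back to raw values

print(suggest_chart("time", "number"))
```

A learned version would replace these if-rules with a classifier trained on (query, chosen-chart) pairs, which is what closes the gap for users who cannot phrase the choice themselves.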
With the changing converged infrastructure and server trends, it is interesting to see the shuffle taking place across the globe between on-premise and off-premise. IT infrastructure is transforming in a big way, and so are the perceptions of today's IT managers. For many organizations, the decision between on-prem and off-prem is far from clear. On the other hand, many are changing their IT strategies to make more room for off-prem and reap the benefits of it. Organizations are still struggling to move toward complete orchestration and automation, whether due to lack of in-house knowledge, lack of business direction, or difficulty finding the right vendor. And this is happening even at organizations with the most skilled IT teams and no shortage of funds.
One thing is true when it comes to the On-Premise vs Off-Premise decision: even small progress in favor of the latter promises tangible infrastructure-provisioning returns. The role of servers and converged infrastructure is changing drastically. The key factors driving it are hyper-convergence, workload balancing, and containerization. Basically, it is all about right-sizing for the future. Many organizations are sure that their existing infrastructure is more than enough to cater to future needs; all it will need is a little expansion and tweaking, but no major replacements. On the other hand, there are very large enterprises worried about a wide gap between what they have currently and what they will need. Hyperconvergence, which earlier had a minor role to play, is becoming a core need, and this is creating major changes in data centers.
Things are Clearer for On-Premise vs Off-Premise
Similarly, a large number of organizations are using or evaluating containers. They believe that container technology has an inherent ability to shorten application provisioning times. These are my thoughts on the current On-Premise vs Off-Premise trends.
This is my concluding post on Top Security Concerns For 2018-2019, a three-post series. As we have seen, Encryption, SOC, and Mobility remain the top concerns. In my last post, we were talking about the increasing mobility of an organization's employees, which automatically creates a new demand: tackling staff mobility between various network environments. Users need to access a number of applications and services in a heterogeneous environment, some on-premise and others residing in the cloud. With the tremendous increase in encrypted tunnels, the whole ecosystem is becoming difficult to manage. Thus, the new equation is to have complete visibility and control of the endpoints. Strategies and priorities are changing shape at a faster pace. Endpoint security functionality is, in fact, a vast area of work.
Endpoint security begins with telemetry collection for the purpose of analysis, and it goes all the way up to complete lifecycle security, inspection, detection, and real-time response. The toughest task is to integrate all of this on a single platform. This brings in a new layer of vendors with a complete focus on endpoint security, including CrowdStrike, SentinelOne, Carbon Black, ESET, Endgame, Cybereason, and Cylance. IoT security and connected devices are the next big thing when we talk about major security concerns. More sockets, more endpoints, more devices, and more code automatically bring more scope for vulnerabilities and threats to any enterprise. In this context, at the recently concluded RSA Conference in San Francisco, Microsoft launched Azure Sphere, a new security platform that focuses on protecting any kind of embedded device in a smart manner. This is the need of the hour.
The scope of Endpoint Security Has Increased Tremendously
Azure Sphere is a combination of hardware and software. It consists of secure microcontrollers providing a hardware-based root of trust to ensure a secure boot. It also includes cryptographic authentication and complete protection of device communications. Surprisingly, this is the first non-Windows OS from Microsoft: it is built on a custom Linux kernel, for two reasons, speed and security. Another surprising move by Microsoft is enabling Azure Sphere to run on any cloud rather than limiting it to Azure. Azure Sphere aims to secure the whole IoT ecosystem, right from the component level to the cloud. There will be more to see from other vendors soon, but all vendors realize the need for endpoint security and the other security concerns that we discussed in these three posts.
This post is in continuation of my previous post on Top Security Concerns for 2018-2019. The key security concerns include Encryption, SOC, and Mobility. In the new concept of 'encryption-in-use', vendors use a number of techniques to tackle the encryption issue, including homomorphic encryption, secure multi-party computation, and secure enclaves. With the help of these techniques, vendors allow access to data for various purposes without the need for decryption. The same technologies are now used in cryptographic key management, delivering the security of hardware-based key management with the help of software, and thus providing a higher level of flexibility and adaptability at a lower cost. The same kind of transformation is taking place in the SOC. There is a severe need for the SOC of the future, and there are distinct guidelines for it.
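To make the homomorphic-encryption idea concrete, here is a toy Paillier cryptosystem with deliberately tiny primes. It is NOT secure and is not any vendor's implementation, only an illustration of the property 'encryption-in-use' relies on: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so computation happens without decryption.

```python
# Toy Paillier cryptosystem -- illustrative only, NOT secure.
# Real deployments use ~2048-bit primes; these are tiny on purpose.
import random
from math import gcd

p, q = 17, 19
n = p * q                  # 323; plaintexts live modulo n
n2 = n * n
g = n + 1                  # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)       # valid because L(g^lam mod n^2) = lam mod n

def L(x):
    return (x - 1) // n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 99
c_sum = (encrypt(a) * encrypt(b)) % n2   # addition performed under encryption
print(decrypt(c_sum))                    # 141 == (42 + 99) % 323
```

The party multiplying the ciphertexts never sees 42 or 99, which is exactly the guarantee these vendors build on, at far stronger parameters.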
The traditional SOC works on the SIEM model: it stores event logs and alerts. These event logs and alerts feed analytics engines, guide investigation teams, drive SAO processes, satisfy search requests, and interface with custom scripts. But that is not enough to tackle current situations and security risks. The new methods use traces in network communications to identify attacks as they happen in real time. The focus now is more on incident detection, exception reporting, and response activity. This new array of vendors includes FireEye, Awake Security, Palo Alto Networks, ExtraHop, Gigamon, Darktrace, Corelight, and Vectra Networks. These newer technology vendors are becoming an integral part of SIEM deployments. Next comes endpoint security, which was one of the most discussed topics at the RSA Conference.
Encryption, SOC, and Mobility remain on top of the security concerns
As we all know, employee mobility is an increasing trend worldwide, and it is a point of concern from a security point of view. Almost 60% of employees in any organization demand mobility between network environments, because they need to access a number of on-premise and cloud services through one secured or encrypted tunnel or another. All of this is becoming difficult to manage and inspect. As we see, Encryption, SOC, and Mobility are changing the whole concept of security.
Finally, we shall be concluding Top Security Concerns For 2018-2019 in the next post.
There were more than 600 exhibitors in April at this year's RSA Conference in San Francisco, with attendance almost touching 50,000. Some very prominent cybersecurity concerns emerged that will keep IT managers on their toes for the next couple of years, at least. Those in Asia Pacific and Japan can register for the upcoming RSA Conference 2018 Asia Pacific & Japan, at Marina Bay Sands, Singapore, 25 Jul 2018 to 27 Jul 2018. More than 65 speakers will enlighten attendees on different aspects of cybersecurity, including the top security concerns for 2018-2019. The zero-trust philosophy is strengthening its roots in this field. It is about discarding orthodox concepts like the legacy security model, which says: trust anybody inside the premises and distrust everybody outside your perimeter. That whole architecture is faulty and risk-prone.
Hence, the new concept is to trust no one. That is probably the right approach to tackling the Top Security Concerns For 2018-2019. Zero-trust is based on a new reference framework, or reference architecture, that is independent of the technology in place. There is no logic in granting access to resources like servers, applications, networks, and devices to everyone inside the perimeter; rather, enterprises need to change their perception of security policies. The zero-trust concept spans all kinds of vendors, including MFA, IDaaS, network security, SD-WAN, and CDN service providers. Encryption, too, is becoming a challenge for security experts. You can keep the whole path carrying the encrypted data secure, but what about security at the point of decryption? That point is highly vulnerable to attack from inside as well as outside.
Top Security Concerns For 2018-2019 Call To Trust No One
The point where data moves from encryption to decryption, and back again, is open to attackers possessing compromised credentials, as well as to malicious insiders. To counter this risk there are two new concepts: 'encryption-in-use' and virtual HSMs (VHSMs). A VHSM is a software suite that stores secret data outside the virtualized application environment. The key vendors for this new technology include Baffle, Enveil, Fortanix, Unbound, Inpher, and PreVeil.
We shall continue about Top Security Concerns For 2018-2019 in my next post.
Machine Learning has been in practice for quite some time now, and enterprises are adopting it fast to leverage its power. It definitely helps a business excel and stay ahead of the competition. Machine learning, as we all know, has tremendous power to automate and optimize any simple or complex business process. There are a number of machine learning use cases we can pick from the enterprise world. Enterprises either are already working on them or have immediate plans to deploy them; those who stay away will feel the brunt sooner or later. It is always better to identify a critical business issue and then work towards addressing it through machine learning. Machine learning deployment can enhance business in many ways. It can create an artificial intelligence spectrum to help in critical business decisions, and it helps in automating business processes, analytics, and operations.
Machine learning use cases can be drawn from many areas of the business. Any business task that is repetitive and/or mundane in nature is one such prominent area; another is activities that involve a high amount of risk or danger. It also helps a lot in quality improvement and in tackling operational issues. Obviously, machine-learning software is there to help humans, not replace them in the job. That is why it is wise to automate most of your low-level tasks so that the human mind can concentrate on more complex ones. Ultimately, a man-machine combination will manage any kind of business. Three valuable components of the business play a major role in machine learning developments: data (both input and output), the model, and the algorithm.
Machine Learning Use Cases Rely On Data, Algorithm, and Model
Data, in fact, is the most valuable asset of any business. And that is the backbone of all machine learning use cases. Data is the real driver of business and business decisions.
A Generative Adversarial Network (GAN) is never a single network. It is a set of at least two networks operating in the same place but working against each other, each producing its own set of results. In the GAN approach, the first network creates realistic images, while the second one judges whether they are real or not. The first network synthesizes something while the second monitors its operations and controls what it creates. As time passes, the second network trains the first to create fake images so perfectly that nobody can make out that they are fake.
That means the fake images the first network now produces are as good as real ones. You won't be able to distinguish the fakes; it is nearly impossible to differentiate between the two. That is the purpose of Generative Adversarial Networks. Now, think about its applicability: which business or industry would need this kind of technology, and for what purpose? Let us consider a few use cases. There are plenty, and many are already in production. The first could be creating fake but realistic healthcare data. The purpose of such records is to train various machine-learning models using different algorithms; since you are not using real data, there is no infringement of patients' privacy.
There are various use cases of Generative Adversarial Networks
Generative Adversarial Networks have become a classic approach in machine learning. Another use case you can think of is creating fake malware in order to test an anti-malware application. There are also plenty of operational projects in the field of fake news videos and fake images of celebrities and famous personalities. If you look at the approach a GAN follows, it matches unsupervised machine learning; but from another point of view, there is also a form of supervision in it. So would you call it an advanced version of unsupervised machine learning, or a mix of supervised and unsupervised machine learning?
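The two-network tug-of-war can be sketched at a deliberately tiny scale. In this sketch, a "generator" g(z) = a*z + b tries to imitate real 1-D data drawn from N(4, 0.5), while a logistic-regression "discriminator" tries to tell real from fake. All hyperparameters are illustrative; real GANs use deep networks and images, not scalars.

```python
import numpy as np

# Tiny 1-D GAN sketch: generator g(z) = a*z + b vs. a logistic
# discriminator D(x) = sigmoid(w*x + c). Real data ~ N(4, 0.5).
rng = np.random.default_rng(0)
a, b = 1.0, 0.0            # generator parameters
w, c = 0.0, 0.0            # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(3000):
    real = 4.0 + 0.5 * rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(f"generator offset b = {b:.2f} (real mean is 4)")
```

As training alternates, the generator's output distribution drifts toward the real one: the discriminator's feedback is exactly the "training" the second network gives the first.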
Reinforcement Learning is an important category of machine learning algorithms. It has a classical connection with theories of behavioral psychology, where we learn about a reinforcement learning environment. The whole game here is of this environment training algorithms while keeping a deep watch on their performance, and rewarding or punishing them on that basis. Before going further, let us go a few posts back and understand the background of machine learning better. In one of my previous posts, we saw how important it is in the current cutting-edge environment for enterprises to adopt machine learning. On one hand, businesses face tougher conditions; on the other, technology offers a lot of scope to enhance and excel against tough competition. This is an era where every disruption is an opportunity.
We also learned in one of the previous posts about the straight-line connection between Deep Learning, Machine Learning, and AI (Artificial Intelligence). Along with that, we learned the supervised and unsupervised machine learning types, the difference between training and inference, and the relationship between datasets, algorithms, inputs, and outputs. Hence, before going further it is worth reading those two posts and the previous one on unsupervised machine learning and its examples. Now, coming back to Reinforcement Learning: it works on a philosophy of rewards and punishments. During each step of the training process, the learning algorithm selects an action from a pool of possible actions and receives an observation and a suitable reward from the environment. By running the same process repeatedly, the model keeps accumulating positive or negative feedback, all in a very dynamic environment.
Reinforcement Learning Works On Rewards and Punishments
The Reinforcement Learning algorithm aims to collect the maximum possible reward to improve its next decision, and it works quite intelligently: it can sacrifice short-term gains if it perceives larger long-term gains. This technology works best in gaming, robotics, and telecommunications.
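The reward-and-punishment loop can be shown with one of the simplest reinforcement learners, an epsilon-greedy multi-armed bandit. The payout probabilities below are made up for illustration: each action pays out with a hidden probability, the agent is rewarded (1) or punished (0), and it gradually learns to favour the best action.

```python
import random

# Epsilon-greedy multi-armed bandit: the simplest reward-driven learner.
# Hidden payout probabilities are illustrative.
random.seed(42)
true_payout = [0.2, 0.5, 0.8]     # hidden reward probability per arm
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]          # running estimate of each arm's reward
epsilon = 0.1                     # how often to explore instead of exploit

for _ in range(5000):
    if random.random() < epsilon:         # explore a random action
        arm = random.randrange(3)
    else:                                 # exploit the best-known action
        arm = values.index(max(values))
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean

print("best arm:", values.index(max(values)))
```

The short-term/long-term trade-off shows up directly: the epsilon fraction of "wasted" exploratory pulls is the sacrifice that lets the agent discover the arm with the best long-term payoff.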
In my previous post, we learned about Machine Learning and Supervised Machine Learning. Carrying it further, in this post we will learn about Unsupervised Machine Learning and its uses. Machine learning is a subset of artificial intelligence. Supervised machine learning has two important steps, training and inference, with inference happening after the completion of training. The whole mechanism works on an algorithm and datasets, and supervised machine learning has its own algorithms and datasets. Its main use is prediction on the basis of new data: although it works on a situation that is yet to take real shape, the prediction process works in a near-to-perfect state. That is the beauty of machine learning and its various applications.
Unsupervised Machine Learning is applicable in the absence, or lack, of training sets. In such a situation you don't have any idea what the shape of the output will be. Unlike supervised machine learning, here all the input data is unlabeled or unstructured; the goal lies somewhere inherent in the data itself. While in the previous category of machine learning we try to predict the future with new data, in this case we try to understand the present, but without any labeled or structured data. The algorithm, again, has to play an important role in drawing out a meaningful result and output.
Unsupervised Machine Learning Works With Unlabeled Input Data
Unsupervised Machine Learning algorithms fall into two main categories. The first is clustering, used to find hidden patterns or groupings in data. The second is association, used to find rules that explain parts of the data: for example, people who like to go to this place also go to that place. An example of an unsupervised machine learning algorithm would be K-means clustering; another is Apriori. The former is of the clustering type, the latter of the association type. The best use of these two different categories of machine learning is together: when you combine unsupervised and supervised techniques, you can easily and effectively use the output of unsupervised machine learning as the training set for supervised machine learning.
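Here is a bare-bones sketch of the K-means algorithm named above, run on toy 1-D data with two obvious groups. The data and initialisation are made up for illustration; library implementations (for example scikit-learn's KMeans) add smarter initialisation and stopping rules.

```python
import numpy as np

# Bare-bones K-means on toy 1-D data: two clusters centred near 0 and 5.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 0.3, 50), rng.normal(5.0, 0.3, 50)])

centers = np.array([data.min(), data.max()])   # crude initialisation
for _ in range(10):
    # Assignment step: attach each point to its nearest center
    labels = np.abs(data[:, None] - centers[None, :]).argmin(axis=1)
    # Update step: move each center to the mean of its assigned points
    centers = np.array([data[labels == k].mean() for k in range(2)])

print(np.round(np.sort(centers), 1))   # centers settle near 0 and 5
```

Note that no labels were supplied anywhere: the grouping was "inherent in the data", which is exactly the unsupervised setting. The resulting cluster labels could then serve as a training set for a supervised model, as described above.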
While many think of AI, Machine Learning, and Deep Learning as synonyms, it is not so. Each is distinct, and so are their purposes and functionality. In fact, there is an interesting relationship between the three in a straight line: Machine Learning is a subset of AI (Artificial Intelligence), and Deep Learning, in turn, is a subset of Machine Learning. Let us see some of the algorithms and models that illustrate this. A machine learning algorithm is a series of actions or computations. The best way to understand this is to think of Random Forest: apply Random Forest to a dataset, and the output produced is the model. The model changes if the data changes, or if you use the same data with a different algorithm.
Basically, the two important steps to understand in machine learning are Training and Inference. Treat training as a process of optimizing the whole mechanism: we use an algorithm for a specific purpose, namely to derive a mathematical function that reduces the error on the training data. Once training is over, inference comes into the picture; it makes predictions on the basis of new data coming in. Now, let us try to understand what Supervised Machine Learning is. In supervised machine learning, we use algorithms trained with labeled datasets. These datasets are highly structured and organized. Here, you use independent variables as input and get numeric or binary results as output. As a result, we use this technology to predict future results on the basis of new inputs.
Supervised Machine Learning Is For Future Predictions
Supervised Machine Learning is of two types. The first is Classification, in which the output is a category: this or that, good or not good, relevant or not relevant, and so on. The second type is Regression, in which the output is a value, like dollars or temperature. Support Vector Machines (SVMs) fall into the first category.
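Training versus inference, and regression versus classification, can all be shown in miniature. The data below is synthetic (y = 3x + 2 plus noise), the fit is ordinary least squares, and the classification rule is a deliberately trivial threshold of my own, just to show a category as output.

```python
import numpy as np

# Training vs inference in miniature on synthetic labeled data.
rng = np.random.default_rng(7)
x_train = rng.uniform(0, 10, 100)
y_train = 3.0 * x_train + 2.0 + rng.normal(0, 0.5, 100)   # labels

# Training: closed-form least-squares fit of slope and intercept,
# i.e. the function that minimises error on the training data.
A = np.column_stack([x_train, np.ones_like(x_train)])
slope, intercept = np.linalg.lstsq(A, y_train, rcond=None)[0]

# Inference on a new, unseen input. Regression output is a value...
y_new = slope * 4.0 + intercept           # close to 3*4 + 2 = 14

# ...while classification output is a category (a toy threshold rule).
label = "high" if y_new > 10 else "low"
print(round(y_new, 1), label)
```

The fitted slope and intercept together are the "model"; swap in a different algorithm (say, a Random Forest) on the same data and you would get a different model, as discussed above.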
Any enterprise not working with any of these three technologies now will certainly be in trouble tomorrow. These three technologies are Artificial Intelligence (AI), Machine Learning, and Deep Learning, and all three have a deep connection with each other. They are right now the most powerful transformational technologies available, and all three are set to touch most aspects of our lives, with or without us knowing it. That is the penetration they will have on mankind. The combination of AI and ML is taking practical shape in production rather than just demonstrating possibilities in academics and R&D; it has become one of the most sought-after enablers in today's technology. Industries are working on it with real-life use cases and ROIs, and adoption is increasing at an exponential rate across the globe.
Many startups working in this area are able to integrate machine learning functionality well with business requirements and draw out appropriate results to enhance and automate business processes, and the results are phenomenal. It brings a tremendous increase in the power, availability, applicability, and flexibility of resources. Without the adoption of these technologies it would probably be impossible to exploit the amount and accessibility of digitized data flowing from various sources. Efficiency keeps improving despite the increase in complexity and volume of data only because of the machine and deep learning algorithms that are the driving factors of AI. As a result, the future appears even more promising with the fast adoption of machine learning in enterprises. Companies not adopting it even now will soon be out of the race automatically.
Machine Learning Is Changing the World Faster
In all these circumstances, control, access, and ownership of data is the key to drive your business.
Information security is the utmost priority for any business these days. While there are a number of projects that a CIO/CTO/CISO can initiate in his organization, a few are important enough to keep on top of the agenda. These projects are not a one-time activity; they are continuous in nature. They basically work on the pattern of PDCA: plan, do, check, and act. That means deployment is not the end of the project lifecycle; rather, the real project begins there. Once you deploy any information security project, regular audits and enhancements are needed. Technology is changing and progressing fast, and the same applies to its negative side: the more you secure a system, the more new vulnerabilities emerge. And threats to an organization come not only from the external world; the threat from inside is equally serious.
To cope with all these threats and vulnerabilities, there has to be an assessment mechanism in place in the organization. Following is a list of key information security projects for an organization. These are all critical irrespective of the size and volume of the business. If any of them are missing, ensure they are put in place:
- Vulnerability Assessment
- Data Loss Prevention (DLP)
- Mobile Device Management (MDM)/Enterprise Mobility Management (EMM)
- Artificial Intelligence/Machine Learning for security
- Security Automation
- Security Operations Changes
- Security Awareness Initiatives
- Cloud Infrastructure Security
- Cloud Access Security Broker (CASB)
- Monitoring Improvements
- Patch Management
- Multi-factor Authentication
- Security Information and Event Management (SIEM)/Security Analytics
- Application Security
- Firewall Deployment/Management
- Regulatory Compliance (e.g. PCI Compliance, GDPR, PSD2, NIST)
- Privileged User Management
- Incident Response
- Intrusion Management
- Identity as a Service (IDaaS)/Single Sign-On
- Endpoint Security
Information Security Projects If Not Started In Time Can Lead to A Big Loss
Another point to note: for the top information security projects currently being implemented within your organization, how do you ensure the key determinants are in place to get approval in time? Otherwise, your information security projects will remain only on paper and never see the light of day.
Zoho is probably among those unique businesses where the business model, business benefits, customer value, and product value remain the same irrespective of the size of the customer's business. Whether the customer is a single professional, a one-person company, or a multinational, the pricing and all business and support propositions remain the same. There is no disparity, no confusion, and hence no differentiation. That is the beauty of this international Indian company, with more than 6000 employees working in more than 19 offices across the globe. The latest news is that its financial suite, Zoho Finance Plus, is 100% GST compliant. In fact, it has a lot more to offer than a number of other popular and tremendously costlier products; basically, it is all about awareness of the beauty of the product. It offers the least investment and great benefits to the business.
Businesses struggling with their existing applications to cope with GST regulations and requirements should look at the smooth operations and outcomes Zoho Finance Plus has to offer, at a very nominal cost. GST is a mandatory regulatory financial requirement for all businesses. The product offers complete integration with legacy business applications. What businesses lack while working on other expensive applications is 360-degree visibility into their order and fulfillment cycle, something missing even in most world-class business and ERP suites. That is where this product takes the front seat, ensuring zero accounting errors and a trouble-free tax period. By launching this GST-compliant financial suite in April 2017, well before the official roll-out of GST by the Ministry of Finance, Government of India, Zoho demonstrated phenomenal strength, depth, and dedication towards its customers.
Zoho Finance Plus Is A Complete Finance Suite++
Zoho Finance Plus, like other Zoho products, is cloud-based, with zero capital investment and nominal operational cost. Sivaramakrishnan Iswaran, Director of Product Management, Zoho, explains, “The proliferation of smartphones, broadband connectivity, and upcoming GST regimen is a great opportunity for businesses to move their accounting and other operations online. With Zoho Finance Plus, businesses get a beautiful interface to manage their transactions day to day and file their GST returns, all from a single platform. Zoho Finance Plus simplifies returns filing for businesses and increases compliance.”
GST is a business reform rather than merely a tax reform. Being mandatory for any business, the success of its implementation depends largely on technology infrastructure and the right strategies being in place. Zoho Finance Plus is becoming the first choice of millions of SMEs, empowering them with the right application for invoicing, filing tax returns, and other critical transactions, while being completely GST-enabled and compliant. Basically, no business thrives and survives on a single application; a number of financial apps require comprehensive integration to communicate with each other and exchange data seamlessly. The Zoho financial suite does exactly that, ensuring management and key users have real-time information for taking fast and right business decisions. Filing a return on the GST portal becomes just a matter of the click of a button.
Zoho Finance Plus Includes Different Modules On Single Database
Zoho Finance Plus includes different modules like Zoho Books, Zoho Invoice, Zoho Expense, Zoho Subscriptions, and Zoho Inventory. For filing GST returns while using these modules, there is no data duplication across apps and no requirement for manual addition of transactions; everything works in a smooth flow, flawlessly. Zoho Books creates monthly returns automatically, so filing the return is just a matter of a click of a button. In addition, there is automatic matching and reconciliation of transactions. So, along with One Nation, One Tax, it becomes One Vendor. There are many other key features of this product, like greater visibility into orders and payments, faster reimbursements, and accurate accounting.
The pricing model of Zoho Finance Plus (https://www.zoho.com/in/financeplus/) is quite simple: INR 2,999 per organization per month, which includes 10 users with access to multiple Zoho Finance apps. These capabilities gave GSTN ample confidence to select Zoho as a GST Suvidha Provider (GSP) (https://www.gstn.org/ecosystem/). In total, there are no more than 70 GSPs in India (https://www.gstn.org/gsp-list/). Iswaran adds, “Being a GSP ourselves helps in cost optimization and providing a great experience for our business users. Furthermore, we will leverage our in-depth expertise in developing platforms and ecosystems to support a thriving community of Application Service Providers (ASP) connecting to GSTN through us.”
In fact, Zoho (https://www.zoho.com/) has made accounting and reconciliation simpler and hassle-free by partnering with banks like ICICI and Standard Chartered to give customers using Zoho Finance Plus an entirely different kind of experience.
Recently, Archive360 was included on Microsoft's list of partners helping customers in their GDPR journey. The list is part of Microsoft's latest blog post, titled “Leverage the Channel Ecosystem to Create GDPR Offers”. GDPR, or the General Data Protection Regulation, impacts all organizations across the globe that do any kind of business with EU firms. In this context, I had a discussion with Dan Langille, Global Director, Microsoft P-AE, Archive360 (www.archive360.com). Here are the key points of the discussion we had:
What were the qualifying criteria for becoming a Microsoft Partner in tackling customers' GDPR issues?
Dan: Partners, such as Archive360, that have services and/or solutions which assist customers with their journeys toward GDPR compliance must be nominated for consideration by a Microsoft employee (usually the Partner Development Manager). Those nominations go to a team Microsoft has within its overall One Commercial Partner (OCP) organization which reviews and approves (or declines) the services and solutions.
What value does becoming a Microsoft Partner add to an organization?
Dan: Microsoft’s investment in curating this list of partners is yet another great example of Microsoft’s commitment to being a partner-led organization focused on collaborating with partners to drive high-value business outcomes (in this case GDPR compliance) through the Co-Sell motion between partner sellers and Microsoft sellers.
What additional responsibilities does this partnership bring?
Dan: Partners get recognized for inclusion in programs such as this through a company-wide understanding of (and alignment to) Microsoft’s go-to-market priorities. As one such partner, Archive360 incurs no additional responsibilities here other than having brought to market a qualifying solution that is also listed in Microsoft’s internal OCP Catalog.
With so many partners on board, does Microsoft apply any performance measurement approach to each of its partners?
Dan: Many Microsoft programs do have performance criteria, but this is not one of them due to the complex nature of GDPR compliance and the myriad ways and means for customers to meet their obligations. (As an aside on the number of partners in this program: The list is actually quite small and exclusive relative to the sheer size of Microsoft’s global partner network and relative to the number of companies around the world that are or might be affected by GDPR.)
Recently I had a discussion with Sejal R. Dattani, Marketing Analyst at Zoho, on custom apps and their impact on business. Are custom apps a costly affair for organizations? According to her, building custom apps for your business is a one-time investment. Below are her views and comments on the topic.
Is your organization using the right tools? For years, companies have provided their employees with packaged software that’s proven and widely used. But business has begun to change. Today, custom apps are quite affordable and easy to build, and more businesses are making their own apps to run their daily activities. Here’s why we expect more businesses to start adopting custom applications in the next few years.
When you depend on the same packaged software as your competitors, it becomes difficult to outperform them. To get an edge, you need to update your processes and implement changes frequently to offer better services.
Since custom apps have become easier to build, even people without a technical background can build software to manage data and automate their processes. And when you have applications that work exactly the way you want, your teams can react faster to customers’ changing demands.
The time required to develop custom apps has drastically reduced, from months to weeks. For example, with a cloud-based DIY platform like Zoho Creator, you can launch your apps without installing new software or configuring servers. And if you need expert advice, you can always get in touch with certified developers to help you out. What’s more, when you create an application on Zoho Creator, you don’t have to waste time and money rebuilding it for various operating systems. Your app automatically works on mobile devices, allowing your team to access vital information and follow up on tasks at any time of day.
Custom Apps Are A One-Time Investment
Businesses nowadays realize that packaged software is rigid: it makes you change your business to fit it. To make things worse, packaged apps are often incompatible with your existing services, too. Custom apps, on the other hand, let you change them to fit your business, and even integrate with your internal applications and other third-party services. For example, imagine running a retail store: you can integrate with a logistics service like FedEx and keep your customers informed of their order status.
The number of businesses switching to custom apps will accelerate in the coming years. And judging from the benefits, it’s no surprise. Building a tailored solution that’s focused on scalability and efficiency is an investment for life.
One of the hottest trends to emerge in the world of enterprise cloud computing is “multi-cloud data management,” which, in a nutshell, is simply keeping track of data assets that reside across multiple data centers and cloud services. As enterprises increasingly move IT operations to the cloud, ensuring the security, availability, and performance of their applications and data becomes increasingly challenging. Today, I speak with Tom Critser, co-founder and CEO of JetStream Software, about his company’s launch and its cross-cloud data management platform.
Q. Please tell me about JetStream Software.
JetStream Software is a new company, but we have a software engineering team that has been together since 2010, and we’ve invested more than 200 developer-years in our core technology. This April, we announced the JetStream Cross-Cloud Data Management Platform. Our mission is to give cloud service providers (CSPs) and Fortune 500 enterprise cloud architects a better way to support workload migration, resource elasticity, and business continuity across multi-cloud and multi-data center infrastructures. Currently, our platform is designed to complement VMware cloud infrastructures including VMware Cloud Provider Partners (VCPPs) and VMware Cloud on AWS. We are headquartered in San Jose, California, with a second development center in Bangalore, India.
Q. How did the company get started?
Our three co-founders and much of the engineering team have been working together for a long time. Our first startup was FlashSoft Software, which developed software for storage IO acceleration. Our objective was to enable enterprise flash memory in a host server to handle IO operations for “hot data” and to deliver the performance of enterprise flash storage, but without replacing the existing storage of the enterprise. FlashSoft was acquired by SanDisk in 2012, and then SanDisk itself was acquired by Western Digital in 2016. At SanDisk, the team grew in size, and we collaborated closely with VMware to design the vSphere APIs for IO Filters framework, which is a key technology for our new company’s cross-cloud data management platform. After Western Digital acquired SanDisk, we worked with Western Digital to establish JetStream Software as an independent company.
Q. Who is your ideal customer, and what problems are you solving for them?
Our ideal customer is a cloud service provider (CSP), serving enterprise customers, and in a similar way, the enterprise cloud architect. We address two key problems for these customers:
- The first is to take friction out of the enterprise’s migration of its on-premises virtual machines (VMs) and applications to the cloud. Enterprise cloud migration today is an expensive, hands-on operation, typically requiring a lot of professional services. There are new tools that help organizations plan their data migration and prepare configurations at the cloud destination, but getting huge volumes of enterprise data to the cloud with minimal disruption remains a challenge, and that’s the problem we target.
- The second problem we address is to help CSPs and private cloud operators deliver enterprise-grade resilience, availability, scalability, performance, and manageability, even across multiple data centers and services.
Q. You say the JetStream Cross-Cloud Data Management Platform provides “built for the cloud” data management capability. What exactly does this mean?
A lot of the technologies used in today’s cloud data centers were originally designed for a single-owner, on-premises operation. But CSP operations are different, so legacy enterprise data management tools aren’t always a perfect fit for the dynamics of cloud operations, such as efficiently managing resources across multi-tenant services, supporting dynamically changing workload demands, and providing mobility, agility, and recoverability across multi-site operations. Rather than trying to adapt legacy on-premises data management tools and methods to this strange new world, we built our platform from the ground up with these dynamics in mind.
Q. Tell us more about the newest product on the platform, JetStream Migrate. What makes it unique?
JetStream Migrate is a software product that enables the live migration of virtual machines to a cloud destination. That means that the VMs and their applications continue to run on-premises while their data is being moved to the cloud destination. JetStream Migrate is the first data replication solution for cloud migration to run as an IO filter in VMware vSphere. This design gives the solution some unique capabilities:
- It supports live migration of applications, even when their data is moved to the cloud via a physical data transport device.
- It enables live migration without snapshots, which is much better for application performance.
- It’s fault tolerant, so if interrupted, the data replication process resumes from the point of interruption.
- Because of the IO filter-based design, it’s a lightweight application that runs seamlessly within a VMware-based data center.
- It gives the administrator powerful capabilities, including the ability to accurately estimate the time required for data replication and the ability to automate many tasks.
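The fault-tolerant behavior in the list above, resuming replication from the point of interruption, can be sketched as checkpoint-based block transfer: the sender persists the offset of the last block the destination acknowledged, so a restart continues from there rather than from zero. This is a minimal illustration of the general technique, not JetStream's implementation; the `send_block` callback and checkpoint file are hypothetical.

```python
# Hedged sketch of resumable, checkpointed data replication (illustrative
# only; JetStream Migrate's actual mechanism is not public in this detail).
import json
import os

CHECKPOINT = "replication.ckpt"       # hypothetical checkpoint file
BLOCK_SIZE = 4 * 1024 * 1024          # 4 MiB blocks (arbitrary choice)

def load_offset() -> int:
    """Return the byte offset to resume from (0 if no checkpoint exists)."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["offset"]
    return 0

def save_offset(offset: int) -> None:
    """Persist the offset atomically so a crash never corrupts it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"offset": offset}, f)
    os.replace(tmp, CHECKPOINT)       # atomic rename on POSIX

def replicate(source_path: str, send_block) -> None:
    """Stream source_path to a destination via send_block(offset, data),
    restarting from the last checkpoint if a previous run was interrupted."""
    offset = load_offset()
    with open(source_path, "rb") as src:
        src.seek(offset)
        while data := src.read(BLOCK_SIZE):
            send_block(offset, data)  # raises on network failure
            offset += len(data)
            save_offset(offset)       # only after the block is acknowledged
```

The key design point is that the checkpoint is written only after the destination acknowledges a block, so an interruption can cause a block to be re-sent but never skipped.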
Q. There are many solutions for cloud migration, so when would an organization choose JetStream Migrate over other options?
It’s important to note that JetStream Migrate is specifically focused on ensuring reliable data replication for live migration. It will typically be used in conjunction with other cross-cloud tools, such as VMware vRealize, vCenter and NSX. The technologies are complementary, and they each play an important role in a cloud migration project.
With respect to data replication specifically, the unique design of JetStream Migrate makes it especially useful when:
- Live migration is required, but data will be transported to the cloud on a physical device.
- The data migration network has insufficient or inconsistent latency or bandwidth.
- The cloud destination is based on vSphere, but the CSP is not running the entire VMware Cloud stack.
- A lightweight deployment is preferred, at the source, on the network or at the destination.
- Network reliability concerns and high data ingest fees make a fault-tolerant replication process preferable.
Q. You’re launching with an impressive list of partners. Can you tell us how you’re working with these partners and what about JetStream Software caught their attention?
Our team has been engaged with VMware for a long time. We previously worked as VMware’s design partner for the development of the APIs for IO Filters framework, so we have been working with these APIs to integrate our products with VMware vSphere for years. Through our partnership with VMware, we’re now also collaborating with Amazon to support the migration of VMs to the VMware Cloud on AWS.
Because of our history partnering with Dell, EMC, Cisco, IBM and HPE, we’ve also resumed and further developed our partnerships with those vendors, starting with the JetStream Accelerate product, which was familiar to our partners. And because all of these vendors are rapidly developing cloud solution portfolios, we’re discussing our new solutions with them as well.
Q. JetStream Software appears to have deep technology credentials. What is your history working with enterprises and cloud service providers?
One of the unique advantages of our particular “startup” is that we’re launching with a full development operation and a technology foundation representing over 200 developer-years of effort invested. We developed and supported a software solution that was deployed at thousands of data centers, both large and small. In doing so, we had a front-row seat to the transition from enterprises operating all-on-premises to the cloud, hybrid cloud, and cross-cloud operations.
Q. You’ve just officially launched the company and the platform with the newest product on the platform. What can we expect later this year?
Our JetStream Cross-Cloud Data Management Platform is maturing rapidly. The focus of our first product has been to remove the friction from cloud migration. Our releases in the second half of the year will bring similar advantages to cloud disaster recovery (DR) and cloud-based disaster recovery as a service (DRaaS).
About Tom Critser, Co-Founder and CEO, JetStream Software
Tom Critser has more than 20 years of experience growing and launching software companies. Previously, Tom was GM of Cloud Data Center Solutions at SanDisk. Tom was a member of the founding team of FlashSoft Software, which was acquired by SanDisk in 2012. Prior to FlashSoft, Tom was VP of Worldwide Sales and Business Development at RNA Networks, the memory virtualization software company, which was acquired by Dell. Prior to RNA Networks, Tom was the VP of Worldwide Sales and Business Development at Infravio, the SOA management software company, which was acquired by Software AG. Tom graduated from Oregon State University with a BS degree in International Business, where he was a Pac-10 All-Academic Team member. For more information, please visit www.jetstreamsoft.com, @JetStreamSoft and www.linkedin.com/company/jetstream-software-inc/.
I am listing 18 information security pain points that can cause quite embarrassing situations in an enterprise. Each of these may cause minor to major losses to an organization, whether in finance, reputation, or business; in fact, all three are deeply connected. These lapses in information security may happen due to a lack of knowledge among internal IT staff, lack of ownership within the technology department, selection of the wrong technology partner, lack of sponsorship from top management, and so on. That is an altogether different topic to explore. Well, here are the top causes listed below:
1. Compliance related costs/requirements:
At times, your IT department is able to identify non-compliance issues but is not able to get approval for deployment or mitigation. At times, it is not even able to assess the mitigation process or its actual cost.
There are various tools to handle this. Ensure that your organization has a strong tool, and that it has the latest patches and updates at all times.
3. User Behavior:
It is not about policing or trailing, but keeping an eye on user behavior is important. A user might initiate a wrong practice that causes huge damage to the organization. Another instance is a user who notices something unacceptable but does not initiate the reporting mechanism.
4. Keeping Up With New Technology:
This is of utmost importance for staying away from information security risks. Ensure it is not a blind chase; include an element of intelligence in it.
5. Security Awareness Training:
Information security is everybody’s responsibility, right from the top executive to the delivery boy in the organization. Awareness and regular training are important in this regard.
6. Mobility Security:
A lot of business apps are now moving to mobile devices. Ensure security is factored into such deployments.
7. Application Security:
Exhaustive testing is the key here. Have top-class staff and tools for testing any loopholes or gaps in your organization’s information security framework.
8. Cloud Security:
The same considerations as mobility security apply here. Any initiative in this regard has to have in-depth analysis and assessment.
The whole world is in its grip; it knows no boundaries. Ensure you take appropriate measures, with regular audits and testing.
10. Third Party/Supplier Security:
Your external partners have to be as secure as you are.
11. Organizational Politics/Lack of Attention To Information Security:
Engage top management at every step. Make them understand that a small gap can cause huge damage.
12. Staffing Information Security:
Have background checks done, at least for all crucial positions.
13. Data Loss/Theft:
Data is the new currency. Treasure it and protect it as real money.
14. Accurate & Timely Processing of Security Events:
If you are not processing security events within the organization or for your external stakeholders, start doing so. If you already are, ensure precision.
15. Malicious Software (Malware):
Testing, audits, and evaluation: this has to be a cyclic process with regular iterations.
16. Endpoint Security:
All other information security measures will fail if you have a glitch at endpoint security.
17. Lack of Budget:
If an investment is important and crucial, ensure it happens. Any delay might prove fatal for the whole enterprise.
18. Firewall/Edge Network Security.
I hope these 18 crucial information security pain points help in understanding the appropriate needs of your organization. Do let me know if I have skipped or missed anything in listing these points.
Has Internet Explorer become the most vulnerable browser? Has Microsoft lost control over it? Or is it that Microsoft no longer focuses on it? Whatever the case, it is no longer safe to use IE, at least in an enterprise environment. It is now a legacy browser that gets little attention from its creator. I think more than two decades is a good run for a browser that once ruled the internet world. There have been security issues with every browser, for that matter, and it is not that there were no threats and vulnerabilities in IE earlier. But Microsoft’s action to tackle those was always prompt. That is not the case now, at least since the launch of Microsoft’s new browser, Edge. Edge is a more secure browser, as the company claims, but it too has certain issues.
Edge, in fact, is missing a number of legacy capabilities that Internet Explorer had. Which of the two is the more vulnerable browser is difficult to say right now. But for some valid reason, Microsoft still installs IE on every Windows operating system it releases. IE is currently the most eligible soft target for attackers. Chinese security firm Qihoo 360 calls one such zero-day vulnerability “Double Kill” and confirms there is an advanced persistent threat (APT) targeting these systems. Qihoo 360 explains Double Kill as follows: it is an IE vulnerability that uses Microsoft Word documents to attack a device. The Word document usually arrives as an attachment and is not clean; it contains malicious shellcode. The shellcode provokes IE to open in a background process, which leads to the attack.
Internet Explorer has become the most vulnerable browser
The background process then prompts an executable program to be downloaded and executed. All of this happens with the help of Internet Explorer and without any warning to the user on whose machine it is happening. Once the malicious document is opened, Double Kill immediately takes control of the user’s computer. That is the beginning of a ransomware infection; it also enables eavesdropping and data leakage. That makes IE the most vulnerable browser.
Is Secure Shell, or what we call SSH, completely secure? It has been more than two decades since Tatu Ylonen from Finland realized a strong need for security components in online transactions. Realizing that, he created SSH, a powerful protocol for accessing anything on the internet. What it does is create trusted access by encrypting all communication that takes place, which in turn secures it from any attack in transit. So basically, SSH builds a tunnel in which every communication gets encrypted, so that there is secure communication between any two points. It was simple yet powerful, and an immediate need of the online world; hence, it was popular in no time. As a result, every OS and device vendor made sure to pre-install it in their software and devices: all Unix, mainframe, Mac, and Linux systems had it.
Not only that, most network devices also had SSH, or Secure Shell, built in. The whole story is about access. If it is so strong, why are there so many cybersecurity incidents? There are various reasons. First and foremost, SSH is taken for granted because it comes pre-installed. I don’t think many organizations pay technical attention to monitoring SSH transactions within the organization or with the outside world. Rather, everybody assumes that if SSH is there, it means complete encryption and hence complete security. But who will check for flaws in the system? What about any customization the organization needs in this regard? Who will manage it? In fact, before you think of managing it, there has to be someone who understands it. As a matter of fact, encryption alone doesn’t ensure 100% protection.
Secure Shell Needs An Enterprise Wide Technical Attention
When we talk about SSH or Secure Shell, it is basically all about authorized access. The challenge for any organization is to protect its data from illegitimate entities. Let us see what the main risks of SSH are. As we know, there is a private key and a public key for accessing any data. A public key corresponds to a lock, and the private key is its key: the lock remains on a door, and the key is in the safe hands of a person. The main risk lies in granting access to critical applications in an organization. If keys are self-provisioned, anybody with the rights to do so can grant access. As a matter of fact, all security tools fail if this happens. The risk increases when people start sharing keys; in those cases, it becomes difficult to catch the culprit after a blunder.
Another high-risk factor with Secure Shell is that its keys have no expiry date. To avoid all these blunders, it is important to have an effective SSH key management mechanism in place. This should include periodic reviews, proper documentation, and appropriate IT controls.
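A first step toward such a key management mechanism is simply knowing which keys grant access to each system. The sketch below is a hypothetical audit helper, not a standard tool: it computes OpenSSH-style SHA256 fingerprints (the base64 of the SHA-256 digest of the key blob, without padding) for every entry in an `authorized_keys` file, so keys can be matched against a central inventory and scheduled for review or rotation.

```python
# Hypothetical SSH key audit sketch: inventory the keys that grant access.
import base64
import hashlib

def fingerprint(pubkey_line: str) -> str:
    """Compute the OpenSSH-style SHA256 fingerprint of a public key line
    of the form "<type> <base64-blob> [comment]". (Lines carrying extra
    option prefixes are out of scope for this sketch.)"""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest as unpadded base64 prefixed with "SHA256:".
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def audit(authorized_keys: str) -> list[str]:
    """Return the fingerprint of every key granted access, skipping blank
    lines and comments, for comparison against a central inventory."""
    fps = []
    for line in authorized_keys.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            fps.append(fingerprint(line))
    return fps
```

Run periodically across servers, an inventory like this surfaces exactly the blunders discussed above: shared keys (the same fingerprint appearing under many accounts) and stale keys that were never revoked.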
Data-driven cloud skills development is becoming critical for any enterprise today. Whether you are already in the cloud or are in the process of moving there, you definitely need to upskill your technical staff. In fact, even if your model is completely on-premise today, you need to understand the technology at its core. Keeping that in mind, Cloud Academy has announced the general availability of training plans in this field. These training plans aim to help CIOs, CTOs, and technical managers chalk out target skillsets, assign training materials for cloud transformation, assess competence, and support certification, upskilling, and onboarding. Any training has to have a purpose. In addition, it should be measurable: one should be able to assess progress and development. That is where training plans come into the picture; with their help, technical leads can map training well to business needs.
Not only that: data-driven cloud skills development helps you define target skillsets for your technical staff, because it then becomes easier to assess each individual’s competence. It also helps you ascertain whether any individual needs customized training, so that you can assign each individual a unique set of training materials and draw out results at scale. As we all know, public cloud implementations are increasing at an exponential rate. In fact, a recent study estimates the global public cloud market will touch $200B in 2018, up from around $150B in 2017. That implies year-over-year growth of roughly 33 percent, well above the commonly cited minimum of 20 percent annually. Even though cloud adoption is at a nascent stage, there is a serious need to prepare organizations for its challenges, risks, and vulnerabilities.
Data-Driven Cloud Skills Development Helps In Many Ways
Data-driven cloud skills development not only prepares you to handle upcoming challenges well but also makes you conversant with the volume and variety of options available in the market. Before any kind of deployment, it is necessary to map training well against the complexity of your IT environments.
Before coming to the re-launch of Zoho Creator, I would like to talk about some unique features of Zoho as an organization. Zoho calls itself the operating system for business. This became apparent with the launch of Zoho One, a complete suite of applications for the enterprise. I think this is the only company with at least one product for every major category of business: sales, marketing, accounting, customer support, and so on. Everything the company creates for its customers is built in-house, with no collaborations and no outsourcing. Interestingly, Zoho offers many of its products free of cost, and none of them have any kind of ad-revenue model. It has more than 30 million users across the globe working in hundreds of thousands of companies. In fact, Zoho runs its complete business using its own products.
The new avatar of Zoho Creator comes with a mobile app creation feature. Zoho Creator, as we know, is a low-code application building platform. What that means is that you now have Mobile App Creator, Page Creator, and Workflow Creator all together while creating powerful web and mobile apps. This is, in fact, a major update to the product; the latest version is Creator 5. There is a complete transformation of the tool’s core functionality, with significant enhancement and refinement not only there but in all other modules as well. Developers can now design and develop mobile apps in their own native, custom manner. The Zoho One Admin Panel supports large deployments and manages apps created with Zoho Creator in a simple manner. This, in fact, provides a unified interface to deploy and manage a complete suite of enterprise applications.
Zoho Creator with Zoho One Provides A Unified Enterprise Interface
Zoho Creator is more than a decade-old product. Hyther Nizam, VP of Business Process Products, Zoho Corp. says, “Over the last 12 years, the Zoho Creator platform has enabled citizen developers to design and deploy over two million custom applications. Having refined our low-code, no-code approach to app development, Zoho Creator has become the app builder for those of us with no formal programming and deployment experience. With this update, we are raising the bar yet again by enabling mobile app creation for both smartphones and tablets, no programming skills required. On top of this, all two million existing Zoho Creator applications are now automatically mobile-enabled.”
Nizam concludes saying, “Think about that for a second. Web applications built on Zoho Creator 12 years ago—before mobile operating systems like iOS or Android even existed—are automatically mobile-enabled and ready for deployment on smartphones and tablets with no effort on the user’s end.”
The latest VMware research, conducted in collaboration with Forbes Insights, reveals a wide gap between CIOs and end users with regard to the acceptability and applicability of business applications in the enterprise. As a matter of fact, the business app is becoming a point of dispute in the Asia-Pacific region. Basically, it is not the quality of the business app in place; rather, it is its deployment and availability on various platforms that is becoming a pain point among the employees of an organization. An app could be feature-rich, but if you are not able to access it from anywhere at any time, it becomes a drag on productivity. On the other hand, it could be available on all platforms but not rich and stable enough to support them all. That kind of scenario also creates a high level of disappointment.
The third angle to this is slightly different: the app is stable and runs properly on different platforms but is not as feature-rich as end users expect. VMware is a leader in cloud infrastructure and mobility. This particular VMware research was conducted in the APJ (Asia-Pacific and Japan) region. The purpose of the study was to interpret the impact of business apps on business and end users in terms of performance, acceptability, and usability. On one hand, most organizations in APJ are speedily implementing business apps; on the other, there is a high level of dissatisfaction among their employees. While the CIOs claim that the apps are the right fit for the organization, the employees or end users feel the apps are causing a loss of productivity. The employees believe the apps are incapable of meeting their business requirements or of creating new business opportunities.
VMware Research in APJ highlights
VMware research clearly states there is a wide gap between CIOs and end users, and this gap needs immediate attention. It is important to create the right kind of atmosphere for productive collaboration and employee satisfaction. The title of the research is “The Impact of The Digital Workforce: The New Equilibrium of the Digitally Transformed Enterprise”. The study covers more than 2,000 global CIOs and end users of large enterprises across Australia, Japan, India, and China. Its focus is on the availability and accessibility of business apps and how they impact work and business. Very few end users are happy with the current situation.
Digital marketing has a close connection with customer experience. In fact, the sole aim of organizations adopting digital marketing is to optimize customer experience. This, as a matter of fact, is a major shift, because businesses are now talking about customized and personalized customer experiences. According to KPMG, more than 30% of businesses among the world’s largest brands are increasing their budgets for online and mobile advertising. Similarly, around 35% of these businesses believe in personalizing the customer experience to gain more traction. The best way to do this is to merge services with technology, because without technology it is not achievable. Buying a product is no longer a consumer’s primary concern; demands are changing on this front, and it is all about meeting rising expectations. Brands need to shorten the gap between themselves and their consumers. The whole paradigm is changing quite fast.
In fact, customer experience includes the high quality of the product that consumers buy. It also includes a meaningful conversation between a brand and its consumers. Loyalty, these days, is not easy to gain. It needs more transparency, closeness, promptness, and deeper concern. As Olivier Njamfa, CEO, Eptica, says, “Consumers are ever-more demanding, and expect fast, high quality and informed conversations with brands if they are to remain loyal.” Thus, when we talk about digital marketing, it is all about developing a brand's strategies with a high focus on the customer. The more trustworthy a brand, the higher its revenues. Businesses are talking about digital footprints and digital impressions. A good amount of past data can provide an in-depth analysis of past successes and of how to leverage them now to deliver a rich customer experience. It has more to do with analytics now.
Digital Marketing and Customer Experience Go Hand in Hand
As a matter of fact, enterprises are keeping their generic marketing separate from digital marketing. The strategies of the two are entirely different. Therefore, businesses need to improve their current framework. They need to map it well to create a highly effective digital marketing funnel that ensures an enriching customer experience. It is, in fact, time to discover an entirely different approach, with the aim of re-optimizing business infrastructure to cater to online consumers and drastically improve customer adoption. Before execution, a digital implementation has to have a solid base of relevant business cases. The whole purpose is to increase opportunities to build a flawless multichannel customer experience framework.
What would be an organization's top barriers to using machine learning? Machine learning, big data, IoT, Industrial IoT, etc. are the newest trending technologies. These are, in fact, becoming a global phenomenon. Rather, they are becoming a necessity for businesses to thrive: businesses that are not thinking in these terms now will have a tough time tomorrow. Like any new technology, these also have their own advantages and constraints. But early adopters will definitely have an edge over their competitors in times to come. The same applies to individuals. Technology professionals, CIOs, and CTOs who are not thinking about applying these technologies in their organizations will have a tough time tomorrow. They will have to answer to their management for not doing it at the right time. As a matter of fact, these technologies will emerge as big differentiators between the success and failure of competitors.
What comes to mind when you think of the top barriers to using machine learning in your organization? To me, the top factors would be data, skills, applicability, budget, deployment, and the approval of top management. All these points are deeply connected to each other. The core strength, or the power to drive this, lies with the technology leadership in your organization. You, as the technology leader, have to be the frontrunner in preparing business cases and presenting them well to top management for approval. Obviously, the key lies in your hands: you alone have the power to open this mysterious lock and show the possibilities to your management. Now, let us look at these barriers one by one and see how to handle them. Accessing and preparing data is the core of machine learning, IoT, and IIoT.
It is Important to Identify Top Barriers To Using Machine Learning
An organization needs clarity on the source, volume, relevance, and applicability of its data. The actual task, in fact, begins once you are able to identify the top barriers to using machine learning. The mere availability of humongous historical data will not suffice; it has to be in proper shape. As a matter of fact, preparing data is the bigger task in this direction. You won't be able to allocate a budget without clarity about your objectives. Finding relevant skills in the market, or developing them from your current pool, is the next big decision you need to take. Running a pilot and showing good results to management is fine, but are you ready to deploy it in production? Are your operational systems ready for this? Finally, top leadership's sponsorship and engagement are important.
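To make the "data preparation" barrier concrete, here is a minimal, illustrative sketch in plain Python. The record fields and cleaning rules are assumptions for illustration; real pipelines involve far richer validation, but the principle is the same: raw data must be filtered and normalized before any model can use it.

```python
# Illustrative sketch: raw records often arrive with missing or
# inconsistent fields that must be cleaned before machine learning
# can use them. Field names here are hypothetical examples.

def prepare_records(raw_records, required_fields):
    """Keep only records with all required fields, normalizing values."""
    cleaned = []
    for record in raw_records:
        if not all(record.get(f) is not None for f in required_fields):
            continue  # drop incomplete records rather than guess values
        cleaned.append({f: str(record[f]).strip().lower() for f in required_fields})
    return cleaned

raw = [
    {"region": " North ", "sales": "1200"},
    {"region": "South", "sales": None},   # incomplete: dropped
    {"region": "EAST", "sales": "950"},
]
print(prepare_records(raw, ["region", "sales"]))
```

Even this toy version shows why preparation dwarfs collection: every field needs a decision about how to normalize it and what to do when it is missing.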
What would be an organization's top reasons for exploring machine learning? Well, it depends on the business vertical in which that organization works, but a few business cases apply to all. Workforce utilization and optimization, for example, could be one of the top reasons for any kind of industry; in the current scenario, every industry thrives on performance. Market forecasting, for instance, is always a top agenda item for any production or service industry, especially the consumer-centric ones. A small glitch in market forecasting could cause huge damage to an industry, not only in terms of finances but also in terms of reputation. Similarly, there are industries that depend on various other kinds of forecasting, for instance weather forecasting. If it goes wrong, farming and the other businesses depending on it can go haywire.
In fact, any kind of forecasting needs a high level of accuracy. Without that, the system will not survive for long; it will lose its sanctity and credibility. The same is true for marketing analysis: a wrong kind of marketing analysis may result in wrong product recommendations and offers. Some more business cases for machine learning are Logistics Analysis, Physical Security, Price Prediction, Supply Chain, Cyber Security, Advertising, Healthcare, Scientific Research, Clinical Research, Preventive Maintenance, Customer Service, Customer Support, Fraud Detection, Social Network Analysis, Communication Analysis, and so on. This is not all; there are ample use cases, and for a specific industry there would be specific business cases in addition to the generic ones. As a matter of fact, while applying any of these business cases in your environment, it is important to chalk out the significant benefits your organization would draw from it.
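The point about forecasting accuracy can be sketched with a toy example, assuming nothing about any particular product: even a naive forecast should be scored against actual outcomes, for instance with mean absolute percentage error (MAPE), so that its credibility can be measured rather than assumed.

```python
# Illustrative sketch: a naive moving-average forecast plus a MAPE
# accuracy score. The demand numbers are made up for illustration.

def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` values."""
    return sum(history[-window:]) / window

def mape(actuals, forecasts):
    """Mean absolute percentage error across paired observations."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

demand = [100, 110, 105, 120, 115]
forecast = moving_average_forecast(demand[:-1])   # predict the last point
print(round(forecast, 1), round(mape([demand[-1]], [forecast]), 1))
```

Tracking an error metric like this over time is what tells an organization whether a forecast is still trustworthy or is starting to "go haywire."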
There are ample business cases for Machine Learning
When you think of business cases for machine learning, think of the results or gains after deployment. These could be an improvement in customer experience, increase in sales, gain in competitive advantages, reduction in errors and mistakes, reduction in risks and vulnerabilities, faster responses to opportunities and threats, faster risk mitigations, lowering of costs and expenses.
With the release of Backup and Replication v7.4, NAKIVO achieves a new landmark. The key feature is Automated VM Failover for near-instant disaster recovery. In fact, there are a number of new features, like Automated VM Failover, Self-Backup, File Recovery to Source, and so on. NAKIVO Inc. is a dynamically evolving virtualization and cloud backup software enterprise. Starting its operations in 2012, this US-based organization aims to produce excellent data protection solutions for Hyper-V, VMware, and various cloud environments. Its consistent double-digit growth over the last 20 consecutive quarters demonstrates its strength in the product line it caters to. Within a short span of 5 years of existence, it has more than 10,000 deployments across the globe. The organization banks on its post-deployment support to its customers. It is not easy to achieve more than 97% customer satisfaction; it requires a stringent focus on customer support.
With all this, NAKIVO (www.nakivo.com) is undoubtedly among the fastest-growing global organizations in the data protection spectrum. Its customers include China Airlines, Honda, Microsemi, and Coca-Cola, to name a few. Operating in 124 countries with around 3,000 channel partners across the globe, the organization is spreading its wings fast. The latest Backup and Replication v7.4 comes with 11 new and unique features. The main purpose of these new features is to simplify the existing disaster recovery mechanism while increasing convenience and comfort for customers. Some of the prominent features are as below:
Businesses can't afford any downtime in their mainstream applications and infrastructure; in fact, zero downtime is what businesses demand from their vendors. NAKIVO Backup and Replication v7.4 aims to achieve this goal by helping customers restore their systems after a disaster without undesirable delays. Automated VM Failover replicates VMs to the DR location and then runs a single failover job. With the help of this feature, businesses can transfer their workloads to the DR location without losing critical business time. This near-instant feature thus minimizes downtime to the least possible. Moreover, the fully automated process performs with the help of re-IP rules and network mapping, so everything happens without any manual intervention. In fact, there is no need for manual reconfiguration of replicas.
Though this feature is in the Beta stage, it holds a lot of promise. It takes care of recovering accidentally deleted or corrupted files to their source VMs or to a different location. This, in fact, doesn't require recovering the entire VM first, which makes the whole recovery process fast and accurate. The recovery is performed from deduplicated VM backups.
NAKIVO Backup and Replication v7.4 protects AWS EC2 instances if so desired. By enabling and configuring this feature, it stores the backups onsite or in the cloud as per the requirements of the business. With the launch of this version, the number of recovery points per EC2 instance is scaled up to 1,000. That brings a high degree of reliability and different recovery options to suit the business environment.
The product is capable of running jobs at the highest speed possible using the available bandwidth, thus optimizing the whole backup and replication process. If a network administrator limits the bandwidth used by data protection processes during peak hours, the product automatically applies such limits on a per-job basis. This, therefore, leaves sufficient bandwidth for critical business applications.
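The idea of splitting a global bandwidth cap among jobs can be sketched as follows. This is not NAKIVO's actual algorithm, which is not documented here; the job names and priority weights are hypothetical, and the sketch only illustrates the general per-job allocation concept.

```python
# Hypothetical sketch of per-job bandwidth allocation: a global cap
# set by the administrator is divided among running jobs in
# proportion to each job's priority weight (all values illustrative).

def allocate_bandwidth(total_mbps, jobs):
    """Split a global bandwidth cap among (name, priority) jobs."""
    total_weight = sum(priority for _, priority in jobs)
    return {name: total_mbps * priority / total_weight for name, priority in jobs}

jobs = [("nightly-backup", 1), ("critical-replication", 3)]
print(allocate_bandwidth(100, jobs))
```

The design point is that the cap is enforced per job rather than globally, so one heavy job cannot starve the others.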
This is a fantastic feature of the NAKIVO Backup and Replication system. In case of a failure in the VM or the physical server that runs the VM backup software, a new instance can be installed in no time; as a matter of fact, it takes less than a minute. Keeping in mind that reconfiguring all the backup settings manually is a time-consuming task, v7.4 tackles this complex situation in a simple manner. It automatically backs up the complete scope of settings that exist in its web interface and saves these self-backups in the available backup repositories. The moment it senses the installation of a new instance, it imports all the previous settings from the backup repository instantly. This includes schedules, preferences, jobs, and inventory.
We are all aware that manually finding a VM, replica, or job in a large virtual environment is a humongous task. It is, in fact, time-consuming and painful. NAKIVO Backup and Replication v7.4 has a Global Search feature that helps find any job, repository, or transporter in a convenient manner. In fact, it is not only about finding an item; it also helps perform individual or bulk actions straight from the search results in an instant and easy manner.
Flash VM Boot helps in many ways. It can near-instantly boot VMs directly from deduplicated and compressed VM backups, thus decreasing downtime tremendously. These VMs, in fact, can be utilized for various useful tasks like sandbox testing. This release extends Flash VM Boot support to Hyper-V VMs.
The mere availability of VM backups or replicas does not ensure their recoverability. NAKIVO Backup and Replication brings a surety factor to this. To ensure the recoverability of VMs, it includes a Screenshot Verification mechanism: after the completion of every backup or replication job, it can automatically test the recovery of the VM and take a screenshot of the OS. This screenshot can be handy for reporting purposes. The feature is now available for both Hyper-V and VMware VMs.
The product saves space and thus infrastructure, hardware, and upkeep costs. It not only saves space in the backup repository but also truncates the transaction logs of Microsoft SQL Server on the source VM. This truncation happens automatically after the successful completion of each VM backup or replication job, which in turn helps save a lot of disk space and avoid a server crash.
As the title of this feature suggests, it allows a quick recovery of tables and databases. This recovery happens from deduplicated backups thereby not requiring the full VM recovery first. This definitely saves a huge amount of critical time of the business.
NAKIVO Backup and Replication v7.4 has a live chat feature that enables customers to connect with the technical support team instantly in case of need.
Bruce Talley, CEO, NAKIVO Inc. says, “We always take our customers’ feedback into account. That’s why NAKIVO Backup and Replication v7.4 introduces cutting-edge features that we know our customers want to protect their virtual environments even more efficiently. We also work to make the product more user-friendly and convenient.”
A fully-functional free trial of NAKIVO Backup and Replication v7.4 can be downloaded at www.nakivo.com.
· Trial Download: /resources/download/trial-download/
The key task of Artificial Intelligence (AI) is to make machines and devices think and thus act intelligently. This happens with the help of software that makes machines and devices think and perform like humans. That is why it is not a misnomer to call Machine Learning a subset of AI. ML's main focus stays on using data and the corresponding algorithms, because with the help of these two it keeps learning and predicting. Organizations' bend toward AI and ML initiatives is thus not a myth. It is happening, and it is happening for a reason. There are plenty of ML frameworks for enterprises to choose from, like Caffe, Spark ML, Theano, Torch, Keras, TensorFlow, and many more. Similarly, there are a number of ML tools, like Jupyter, OpenCV, NumPy, Beaker, Pillow, Pandas, Zeppelin, and scikit-learn.
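The "data plus algorithm" principle behind these frameworks can be shown with a minimal sketch in plain Python: fitting a line to observed points by least squares and then predicting a new value. A real project would use one of the frameworks named above, but the mechanics are the same; the numbers here are made up for illustration.

```python
# Minimal "learning from data" sketch: least-squares fit of a line
# y = a*x + b, then prediction for an unseen input.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

xs, ys = [1, 2, 3, 4], [2, 4, 6, 8]
a, b = fit_line(xs, ys)
print(a, b)        # slope ≈ 2, intercept ≈ 0
print(a * 5 + b)   # prediction for x = 5
```

Every ML framework, however sophisticated, is doing a scaled-up version of this loop: fit parameters from data, then predict on inputs it has not seen.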
For creating a machine learning environment, an organization needs to depend on various data sources. These could include customer data, employee data, data outsourced from data brokers, data from various government and non-government sources, location resources, market resources, and social media. This is where big data comes into consideration: to manage such a humongous volume of data, you need a different set of tools and a different environment. There are various data brokers in the market these days providing voluminous data, including Acxiom, Experian, Oracle Data Cloud, and DataSift. Similarly, when we talk of government sources, it could be census data, for instance. Location resources like TomTom and Google Maps are also quite popular these days. Social media data sources are Facebook, Twitter, LinkedIn, Google+, Instagram, Pinterest, etc. Non-government data sources would include weather and environment data. And then there are market sources like SymphonyIRI, Nielsen, Reuters, and so on.
Machine Learning Is a Revolution In Industry 4.0
Machine learning can be helpful in various ways. For instance, it can help in workforce utilization and optimization. But that is not all: there are many more business cases for an organization to use ML.
I can build multiple use cases to depict the importance of the Modern Requirements Management Tool, built on and for Microsoft Team Foundation Server. But before talking about Modern Requirements4TFS, let us understand what the biggest threat to the success of a project is: the business or user requirements. A minor ambiguity in requirements finalized at the initial stage of a project can create a major threat to the project at a later stage. PMI (Project Management Institute) research says there are only 3 things that can kill a project: People, Process, and Communication. There are 7 factors that lie within these 3 segments, and a lapse in any of the 7 can significantly cause a delay or failure. PMI says, “Provide the project team members the tools and techniques they need to produce consistently successful projects.”
That shows the importance of a good tool like Modern Requirements4TFS to the success of a project: a tool that covers people, process, and communication to keep a strong hold on the project throughout its lifecycle. “Most project problems are caused by poor planning,” says CompTIA. And in most of those cases, it is the requirements that play havoc. Orthodox requirements management tools have failed drastically at capturing today's dynamic requirements in a crisp manner. Requirements captured only in text leave a lot of scope for loopholes and ambiguities. That is why you need a tool that captures requirements with visual contexts and use cases. This not only helps in capturing the requirements clearly and unambiguously but also helps set a faster pace for the development of the project. Project Insight, too, has identified the top four causes of project failures.
Team Foundation Server from Microsoft is popularly known as TFS. To use it to its full potential and draw out the maximum benefit for requirements management, you need an intelligent tool, and the Modern Requirements Management Tool is one of those. It is uniquely integrated into Microsoft Team Foundation Server and VSTS. Modern Requirements4TFS gives you access to a series of new hubs and features. It is a web-based tool with multiple modules. These modules empower you to take full control of requirements management; with its help, you can define and manage requirements quite easily and run the show efficiently. It integrates with Microsoft Team Foundation Server and Visual Studio Team Services, thus giving you a great level of control on the Microsoft platform. The complete setup can be arranged as per your requirements: it can reside on-premises in case you have limitations on the cloud, or else in the cloud.
Most of the organizations building for Microsoft platforms use Modern Requirements4TFS for requirements management. In fact, the Modern Requirements Management Tool, built on and for Microsoft Team Foundation Server, is the most appropriate tool in such an environment because all requirements are stored natively as work items in TFS (Team Foundation Server) / VSTS (Visual Studio Team Services). Many organizations also use TFS for tracking development work items, testing, and release management. In a nutshell, Modern Requirements users get a rich set of features and functionalities.
The success of any project rests on three key factors: Open Communication, Effective Collaboration, and the use of Intelligent Systems. A tool like Modern Requirements4TFS enables these three factors to help any organization achieve effective results in project management and team collaboration. Sharing your plans visually acts as a catalyst in the process of presentation and understanding, and thus reduces risks drastically. On top of that, the tool improves scope management, change management, and project quality.
DH2i, a leading provider of multi-platform high availability/disaster recovery software, just released Version 17.5 of its DxEnterprise software. To gain a better understanding of this software and what it offers to enterprise IT users, I recently sat down with Don Boxley, co-founder and CEO of DH2i (www.dh2i.com).
What key issues are you addressing with the v17.5 release of DxEnterprise?
Boxley: At the core, the issue comes down to how enterprises can achieve digital transformation: IT teams are under pressure to do more with limited resources. Businesses and other enterprises are increasingly dependent on data, so they require high availability – as little as 10 minutes of downtime can be disastrous. Finally, with sprawling infrastructures, IT resources are strained.
To overcome these issues, enterprises need to think strategically about integrating legacy infrastructure. For the majority of today’s enterprises, the primary obstacle is with the legacy infrastructure. It is expensive to maintain, both financially and in terms of required labor. This is the most pressing issue facing today’s enterprises.
How might v17.5 help with the legacy infrastructure constraint enterprises face?
Boxley: DxEnterprise, and more specifically, version 17.5, alleviates the legacy infrastructure issue with an application-based approach. It supports an industry-first unified Windows/Linux automatic failover and fault detection. The company initially had a Windows focus. This new release builds on earlier DxE versions, allowing management and servicing of Linux, with automatic Windows/Linux failover for SQL Server 2017. It features a single-console Windows/Linux management. SQL Server 2017+ users benefit from this multi-platform environment as it allows them to move workloads and data to and from any cloud. They can also use it to scale cloud-based data analytics and business intelligence.
Another key component of DxE v17.5 is that it enables users to create a new class of distributed frameworks which allow workloads to move to the best execution venue, based on computational and budgetary considerations – we call this Smart Availability. This often means fewer operating system environments are required and reduces time spent on system maintenance. Ultimately, it frees IT professionals to spend their time on higher-yield activities that impact the bottom line.
You talk about Smart Availability, as opposed to high availability. Can you describe the difference and what it means for the IT user?
Boxley: High availability refers to overall uptime, while Smart Availability is an evolved, strategic approach in the direction of that same general goal. Smart Availability decouples databases, Availability Groups, and containers from the underlying infrastructure, and hence allows workloads to move to their best execution venue. High availability alone is often counterproductive: it simply adds to the infrastructural complexity without regard to the overall objectives. Smart Availability instead adapts to the overall business objectives and constraints.
Are there any other applications you see being created by this new release?
Boxley: The single cross-platform service, with its built-in HA capabilities, will be useful to managed service and public cloud providers. They’ll be able to increase recurring income by offering this service to customer applications – previously we saw many of these providers leaving it to the customer to ensure high availability. With this release, providers can include high availability as a service.
We’ve also included enhanced features such as InstanceMobility for dynamic Smart Analysis workload movement, and intelligent health and QoS performance monitoring. These help ensure DxE v17.5 cuts costs, simplifies IT administration, and frees the IT team up to do the most impactful work in the enterprise.
Performance monitoring has become a critical factor for all business applications running in an enterprise, and there are various reasons for it. Firstly, no application functions in isolation. There is always a dependency, either backward or forward: an application is either pushing data to another application or pulling data from one. As a matter of fact, infrastructure is not outside the scope of performance monitoring either. Everything has to be in sync, because it is the overall performance that matters in the organization. So even if your infrastructure is modern and state-of-the-art, performing at rocket speed, it loses its value if the applications residing on it are under-performing. Let us have a look at the rising trends in performance monitoring for both applications and infrastructure. The key contributors are Big Data, Machine Learning, IIoT, etc., and the SaaS delivery model plays a significant role in this.
Overall, industrial architecture is not as simple as it was a few years back. This is the age of complex applications. Features like containerization, microservices, and heterogeneous clouds to tackle data overloads are becoming critical and important. Data, in fact, is flooding in from all directions, and it is very important to analyze it. There are various approaches to a proper performance monitoring mechanism, and it is necessary to learn about each: code-level APM (Application Performance Monitoring), Network Performance Monitoring (NPM), performance testing, Real User Monitoring (RUM), and synthetic monitoring. Code-level APM is a good tool to report load time and response time; in fact, it smartly figures out the lines of code in the application that are causing the trouble. New technologies obviously require new approaches. For instance, technologies like containerization and microservices need tracking of a tremendous amount of data to ascertain performance.
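The kind of response-time tracking that code-level APM performs can be sketched with a simple decorator. Real APM agents instrument code automatically and at much finer grain; this hand-rolled version, with a hypothetical `handle_request` function, only illustrates the core idea of recording how long each code path takes.

```python
# Illustrative sketch of code-level instrumentation: record each
# function's response time so slow code paths can be pinpointed.
import time
from functools import wraps

timings = {}  # function name -> list of observed durations (seconds)

def monitored(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings.setdefault(fn.__name__, []).append(time.perf_counter() - start)
    return wrapper

@monitored
def handle_request(payload):
    time.sleep(0.01)   # simulated work
    return payload.upper()

handle_request("ok")
print(timings["handle_request"])
```

Aggregating such timings per function is what lets a monitoring tool point at the exact code responsible for a slow response.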
Performance Monitoring Needs Better Tools
Looking at the complexities of performance monitoring, vendors offering APM and similar services are including machine learning methodologies to attain optimum results in data mining and to generate important information. After all, it is performance that matters most in an organization, and it becomes the responsibility of the IT department to ensure that no employee's performance is negatively affected by the poor performance of an application or of the infrastructure in place. Usually, it is somebody at the top of the technology department who owns this responsibility; in fact, this is the person who is answerable for any kind of performance issue.
An increase in the applicability of Artificial Intelligence (AI) in real life is responsible for the development of chatbots. In fact, the technology has reached a maturity level where it is able to engage a prospective customer quite significantly. A steep growth is estimated in the chatbots industry from 2015 to 2024: according to a report from Transparency Market Research, the market was valued at US$100 million and is about to touch US$1,000 million in 2024. That is a phenomenal jump in all respects. On a similar note, a PwC research paper states that AI will be contributing US$20 trillion to the world economy by the year 2030. That signifies the tremendous potential of this market in the coming years. The drastic drop in data rates is one of the key contributors to this phenomenal growth. Alibaba Group is investing a large pool of its profits in R&D, especially AI.
Note that Alibaba's sales are currently growing at a rate of 20%, amounting to US$100 million; these figures are from a report by Market Realist. In fact, such stupendous growth in the chatbots industry is changing the whole paradigm of the business model. New avenues are coming up in infrastructure, cost-saving alternatives, and business transactions. As a matter of fact, Artificial Intelligence (AI) is contributing significantly to business development in almost all industry segments, and it is getting quite positive results in terms of business growth and development. It is turning out to be a highly beneficial proposition. Rather, chatbots are becoming a mainstream companion of operations. It is, therefore, important to understand how you can augment market size through chatbots. If you are able to do that, you can easily minimize operating cost and maximize productivity.
Chatbots Industry Sees Tremendous Growth in AI
As a matter of fact, Chatbots are a clear example of a new form of collaboration between manpower and machine. That means if you are able to harness the power of artificial intelligence in business automation, especially Big Data, it can increase your operational efficiency manifold.
What is digital transformation? Different enterprises define it in their own way, and the chances of going wrong are directly proportional to how far you are from understanding it rightly. Basically, it is how you integrate digital technology into all areas of your business. The transformation will lead to a drastic change in your way of functioning: it will change how you operate and, in addition, the manner in which you deliver value to customers. That means when it happens at your end, your customers will also experience a huge transformation in the way they do business with you. It demands a big cultural change, not only within the organization but also around it, affecting all stakeholders in one way or the other.
Digital transformation involves digital technology extensively; in fact, a large amount of mobility will enter your day-to-day functioning. It also demands continuous change, involving a lot of experimentation, failures, and successes. The best way to get the most out of it is to keep challenging your status quo: the more you challenge, the more ideas you get to improve it. Digital transformation is important for all kinds of businesses and for businesses of all sizes. So whether you are a small business or a large enterprise, the importance remains the same. It is important to adopt it in order to stay competitive in the market; if you don't take appropriate steps in this direction, your competitors will leave you behind in no time. It also keeps your relevance intact. But it is not merely moving to the cloud, as a lot of business leaders think.
Digital Technology Is A Lot More Than Moving To Cloud
It is important for an enterprise or a small business to understand digital technology and digital transformation correctly. What specific steps do they need to take? What changes in job profiles might happen? Rather, what new jobs come into existence? In fact, what could be the right framework to start with? Do you need a consultant to start? What changes in business strategies will happen? And most importantly, what is the real worth of it? What are you gaining out of it? All these things are very important to understand.
“Digital technologies continue to transform the work, how we interact with colleagues, and the value we deliver to clients and customers,” says Asoke Laha, CEO, InterraIT at the event ‘The Future of Digital Transformation’ at their Noida office. “This means all decision making is data-driven, and leadership must focus on providing insights into marketing and customer engagements,” he concludes.
There are a few things to notice about data breaches. Enterprises are preferring the cloud over on-premise for less critical applications, which means information security trends are changing noticeably. But more important to understand is whether the cloud is driving a shift in security spending and, if so, whether that shift is upward or downward. Studies reveal security budgets are rising consistently across the globe. A portion of the credit should go to the grand publicity given to security breaches, especially breaches like Spectre and Meltdown. These are the unanticipated risks that can take a bigger bite than the IT budget you keep for security. In a nutshell, security has become one of the top two budget components, the first one being the cloud; as a matter of fact, it might take the top slot in time to come. Despite all constraints, security budgets are moving up, and that trend is visible in companies of all sizes.
Even though cloud service providers maintain their own security controls, internally or through third parties, security is still a topmost concern of businesses. Rather few enterprises depend solely on their cloud service providers to raise an alarm about data breaches. According to one report, on average almost 20% of the IT budget was allocated to information security by most organizations in 2018. Around 5% of organizations say their spending on information security will be less in 2018 than in earlier years, but that is negligible: more than 95% of businesses are spending more on information security than in the previous year. Organizations are spending more on application security than on hardware and network security. Security spending trends are changing drastically, which shows the substantial impact of the cloud on these spendings. Testing and performance are becoming two major thrust areas in cloud environments for the identification of data breaches.
Data Breaches Are One Of The Biggest Threats
Such proactive approaches to testing will decrease enterprises’ reliance on cloud vendors, because they would get automatic alerts before their cloud vendors notify them of data breaches.
Io-Tahoe has just announced the General Availability (GA) release of its smart data discovery platform. I sat down with Oksana Sokolovsky, CEO of Io-Tahoe, to better understand the data challenges facing modern enterprises, and how Io-Tahoe is attempting to address them.
You’ve just announced the GA launch of the Io-Tahoe platform. What challenges are you hoping to address with it?
Sokolovsky: I founded Rokitt Astra (Io-Tahoe) in 2014, together with Rohit Mahajan, our CTO and CPO, with the goal of providing the go-to platform for data discovery. The modern digital enterprise faces a complex set of challenges in maximizing the business value of data. For one, enterprises struggle with how to integrate a growing number of disparate platforms, with a formidable volume of data stored across databases, data lakes, and other silos. This makes it difficult or impossible for organizations to comprehensively govern, and ultimately utilize enterprise data.
How does Io-Tahoe address these challenges?
Sokolovsky: We built Io-Tahoe with the goal of providing a fundamental building block for all data discovery. This vision entails making data available to everyone inside the organization and automatically weaving through the data relationship maze to provide actionable insights to the end user.
The platform is built on a machine learning base. It uses machine learning to identify data relationships, including within both metadata and the data itself. It operates in a “platform agnostic” manner and allows organizations to uncover data resources across diverse technologies.
The platform enables a variety of disciplines in the data field – from analytics to governance, management, and beyond. It also commits to a leverageable view of data – data insights should be available to everyone in an organization. This is made possible through an easy-to-use interface, built on a scalable architecture.
We’ve also included a new Data Catalog that allows organizations to compile or enhance data information so it can be leveraged across the organization.
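As a rough illustration of what this kind of relationship discovery involves (not Io-Tahoe’s actual algorithm, which the interview describes only as machine-learning based), a naive sketch might profile column values across tables and flag heavily overlapping columns as candidate join keys. All table names, column names, and the overlap threshold below are hypothetical:

```python
# Naive data-relationship discovery: flag cross-table column pairs whose
# value sets overlap heavily as candidate join keys. Hypothetical sketch;
# real platforms also use metadata, sampling, and learned models.

def candidate_relationships(tables, threshold=0.8):
    """tables: {table_name: {column_name: list_of_values}}.
    Returns (table.col, table.col, overlap_ratio) tuples above threshold."""
    profiles = {
        (t, c): set(vals)
        for t, cols in tables.items()
        for c, vals in cols.items()
    }
    found = []
    keys = list(profiles)
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            if a[0] == b[0]:  # skip pairs within the same table
                continue
            inter = profiles[a] & profiles[b]
            smaller = min(len(profiles[a]), len(profiles[b]))
            if smaller and len(inter) / smaller >= threshold:
                found.append((f"{a[0]}.{a[1]}", f"{b[0]}.{b[1]}",
                              round(len(inter) / smaller, 2)))
    return found

tables = {
    "customers": {"id": [1, 2, 3, 4], "country": ["US", "IN", "US", "DE"]},
    "orders": {"customer_id": [1, 2, 2, 4], "total": [10, 25, 5, 40]},
}
print(candidate_relationships(tables))
# → [('customers.id', 'orders.customer_id', 1.0)]
```

Even this toy version surfaces the implicit foreign-key link between the two tables without any schema declarations, which is the kind of insight a discovery platform then feeds into governance and cataloging.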
What do you see the future holding for data discovery?
Sokolovsky: In two words: dramatic growth. A recent report from MarketsandMarkets (1), for instance, predicts data discovery market expansion from $4.66 billion, its 2016 estimated size, to $10.66 billion by 2021. This represents a compound annual growth rate of nearly 20 percent. Most of this growth will be in Europe and North America, with retail services, financial services, and utilities as three of the largest opportunities.
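The cited growth rate can be checked against the standard CAGR formula, CAGR = (end / start)^(1/years) − 1, using the report’s own figures:

```python
# Verify the cited compound annual growth rate:
# $4.66B (2016) -> $10.66B (2021), i.e. five years of growth.
start, end, years = 4.66, 10.66, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # → 18.0%, consistent with "nearly 20 percent"
```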
The primary foundation of this demand is the increasing need for data-driven decision processes, but other factors are also driving this explosion. A few we’ve identified include regulatory pressures such as GDPR; the rise of intelligent technology, which applies predictive analytics in smart computing; the shortage of qualified data scientists; the explosion of available data and the increased demand for understanding it; the monetization of data assets; and the unification of data platforms and management.
It sounds like it’s perfect timing for your release of the Io-Tahoe platform. Can you explain why this launch is so exciting for the end users?
Sokolovsky: I’ll be glad to. The GA launch of our data discovery platform is opening our unique algorithmic product to all enterprises. The machine learning aspect will allow them to auto-discover patterns and relationships in their data, and the Data Catalog promises to guide data owners and stewards through business rules and data policy governance.
For example, it can automatically uncover data across the entire enterprise in a matter of minutes, rather than weeks. This reduces labor costs and allows organizations to tap into potentially valuable data.
It also offers self-service features, empowering the end users to engage and share data knowledge. The Data Catalog feature, in particular, enables users to govern data across heterogeneous enterprise technologies, comply with regulations such as GDPR, and automate the previously manual process of data discovery. This will increase efficiency and use of enterprise resources.
How about a use case – can you give us a clearer picture of what Io-Tahoe looks like in practice?
Sokolovsky: Sure – we’ve actually developed three representative use cases to illustrate how customers could use Io-Tahoe. First, the systems use case: the platform can help them understand data lake and database migration. It can also help with system migration, modernization, as well as M&A system integration/divestiture. Second, the data analytics use case: this comprises analytics improvement, increased revenue potential, and improvement of complementary products. Third, the regulatory use case. The Io-Tahoe platform can assist with data governance, as well as regulatory compliance.
Has Io-Tahoe already seen an application?
Sokolovsky: It has. We have multiple successful examples to share with you. First, a customer used Io-Tahoe’s platform for data discovery and impact analysis as part of its re-platforming efforts. The customer’s analysis time was reduced threefold, and cost decreased by 80 percent, with dependencies well managed and accounted for.
A major investment bank used Io-Tahoe for data asset discovery and appointed a new Chief Data Officer (CDO) to manage data assets. The organization reported similarly positive results, with the data discovery process becoming automated, reliable, and less labor intensive. This freed staff, including the CDO, to focus on analytics.
It sounds like the timing is perfect for Io-Tahoe. Do you have any last words or thoughts to share?
Sokolovsky: I want to emphasize, we’re excited about the opportunity to use our technology to address growing, real-world challenges with data discovery. Few of our competitors are addressing these issues. Enterprises require effective and comprehensive access to their data, regardless of where it’s stored. They require data governance, and compliance with regulations, along with a deeper view and understanding of data and data relationships. Hence, we believe Io-Tahoe may soon be a priority purchase for every CDO.
(1) Data Discovery Market by Type (Software and Service), Service (Professional and Managed), Application (Risk Management, Sales & Marketing Optimization, and Cost Optimization), Deployment, Organization Size, Vertical, and Region – Global Forecast to 2021. marketsandmarkets.com. January 2017.
It was quite an insightful day, bringing industry leaders, academicians, and students together. The event was held in-house at InterraIT, but the point of discussion was no less than a topic of global relevance: doing business in India. We had Dr. AD Amar, Professor at Seton Hall University, USA, who came along with a team of young, energetic, business-minded students from his university. Industry expert and veteran Asoke K Laha, President and CEO, Interra Information Technologies, was the key person guiding the students on business tactics in India and the US. With offices and operations in both countries, he has complete knowledge of both worlds, which makes him a perfect choice to guide these budding entrepreneurs, hailing from various countries and studying at Seton Hall University.
It was not only the students from Seton Hall University who were eager to learn about doing business in India; we also had a group of students from the IIF (Indian Institute of Finance) along with their professor. The professor is an active member of ASSOCHAM (The Associated Chambers of Commerce and Industry of India), so he brought a wealth of information about the practicalities of opening and closing a firm in India. Opening a company is now quite an easy process in the country: it takes hardly a few minutes, and the whole process is online. The only hiccup that may haunt an entrepreneur is the process of closing a company in India, which still takes a number of years and the involvement of the High Court for the closure formalities. That remains a grey area.
Doing Business In India Has Become Easier
The overall motive of any country should be to facilitate young entrepreneurs from other countries who want to do business there. As Mr. Laha says, learning about the culture and the people is the foremost requirement for this.
Digital healthcare is not a dream now; it is happening and evolving across the globe. Though the speed of evolution may vary from country to country, every country acknowledges its potential, hence the speedy adoption. It can help in creating an ecosystem that is not only lean but also effective. Given the manner in which industrial technology is advancing with the internet as its backbone, pharmaceutical and medical technology can achieve astonishing results. It is leading to a system with many benefits: a less labor-intensive architecture will be a major outcome, and the whole mechanism promises to be cost-effective, giving health institutions across the globe an overall lean operating model. The global healthcare market is about to touch US$130 billion, and by adopting digital technology it is bound to create a new paradigm.
If you belong to this field, you must attend the Digital Healthcare Conference happening in May 2018 in Bangkok. The health and pharmaceutical sector, with the help of the digital movement, can definitely do wonders, using cloud technologies, IoT (Internet of Things), AI (Artificial Intelligence), Big Data, VR (Virtual Reality), mobility, and automation. With the right adoption of technology, the sector can optimize its precision, efficiency, and speed to an unbeatable level. Recently, there was a survey by Microsoft in this regard, the Microsoft Asia Digital Transformation Survey, which clearly states the importance of medical technology in everybody’s life. It finds that more than 75% of leaders in the healthcare segment in the Asia Pacific understand the importance and gravity of transforming into a digital business, which is going to play a key role in future growth.
Digital Healthcare Is The Solution To Many Issues
A modern-day wellness provider can’t think of surviving without the adoption of digital healthcare. In fact, there has to be a complete healthcare mechanism that promises to deliver seamless health services, and that can happen only with proper synchronization of physical, biological, and digital systems. This is arguably the only way to tackle critical health issues across the globe. Obviously, this needs a proper training process that educates people on the changing trends of medical technology; it is important for end consumers to be able to leverage these latest trends seamlessly. Otherwise, the whole effort will go to waste.