This article is the final installment in a six-part series about monetizing IoT.
In earlier articles, I evaluated the monetization framework's three levers that affect pricing: monetization models, monetization metrics, and product structure and packaging. The series also described the IoT solution stack, a pyramid with three levels to be monetized: a large number of edge devices and their firmware; a smaller number of gateways that connect the edge devices and aggregate their data; and a single instance of a cloud analytics and control platform that draws insights from the devices in the network and, in turn, may communicate down the stack to manage or control the edge devices. With those elements in mind, consider these sample monetization approaches:
Monetize layer-by-layer, then collectively
First, consider a solution where pricing is applied at each level of the IoT stack. For example, assume you're monetizing an industrial IoT solution. Smart valves sit at the bottom of the stack. The gateway layer comprises programmable logic controllers (PLCs) and software. At the top of the stack, a cloud-based analytics and control solution gathers information about the devices and communicates with various systems at headquarters to facilitate material planning and service calls, providing advanced analytics to operations personnel. The layers will be monetized individually, then collectively.
IoT stack level: Edge. These are physical valves with function and value provided by a combination of hardware and embedded software. For example, a product manager might decide to monetize the edge controllers with the following monetization levers:
- Model. The model is selected to match the physical nature of the device and the associated capex purchase model. It is often perpetual, with software maintenance to provide service and updates. Selling this with a perpetual model matches the perception that the device is a key element in a mission-critical factory and can’t be allowed to stop running because of an expiration event for a time-based model.
- Metric. The volume of liquid through the controller measured in gallons per hour.
- Product structure and packaging. There are two options: a base product, which provides the core value of controlling flow through the valve; and an add-on product, an option that automatically shuts off the valve and sends a message to the controller if a problem is detected in the flow.
IoT stack level: Gateway. These are physical devices, the PLCs, with connectivity to both the edge devices and the cloud analytics and control layer. They are more complex than the edge devices because they have richer functionality that can be monetized in more creative ways. Initially, a relatively straightforward monetization approach is designed with the following levers:
- Model. Again, this is a perpetual model; the device is monetized similarly to the edge devices because of its physical nature, capex buying model and need for high availability, with no exposure to a subscription expiration.
- Metric. Several metrics were chosen, based upon the different products being offered at the product packaging and structure level. The metrics will be the number of attached devices per instance of a PLC, and data throughput of the PLC to the cloud analytics and control platform measured in GB per month.
- Product structure and packaging. The two following initial structures were sold: Base PLC using the attached devices metric described above; and PLC connectivity option, using the data throughput metric described above.
IoT stack level: Cloud analytics and control. This is the cloud or SaaS platform that runs on a public cloud, such as AWS, or on a private cloud. It is inherently a software-based service with ongoing costs and value, so it will be sold using a recurring revenue model. Because it is a data aggregation, reporting and control platform with potential integrations to various back-office systems, such as materials management, many different offerings can be created. For simplicity, just a few common offerings selected by the product manager are illustrated, with the following monetization levers:
- Model. A subscription-based model with usage rights, access to updates and a base level of services enabled for one year at a time. Longer term, it will also be sold with 2- and 3-year terms.
- Metric. The metrics chosen to represent scalability as the solution grows are the maximum number of connected PLCs being managed by the cloud analytics platform and the total number of systems that integrate with it.
- Product structure and packaging. The three following initial structures were selected: Base management platform; advanced reporting option; and system integration option.
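To make the layered approach concrete, here is a minimal quote-calculator sketch. All product names, prices and metric values below are hypothetical illustrations of the levers described above, not figures from the article:

```python
# Hypothetical quote calculator for the three-layer IoT stack.
# Every rate below is an invented placeholder.

def edge_price(units: int, gallons_per_hour: float) -> float:
    """Perpetual license per valve, tiered by rated flow (gallons/hour)."""
    per_unit = 500.0 if gallons_per_hour <= 100 else 800.0
    return units * per_unit

def gateway_price(plcs: int, attached_devices: int, gb_per_month: float) -> float:
    """Perpetual base PLC license (priced by attached devices)
    plus the connectivity option (priced by GB/month to the cloud)."""
    base = plcs * 1_000.0 + attached_devices * 50.0
    connectivity = gb_per_month * 10.0
    return base + connectivity

def cloud_price_per_year(max_plcs: int, integrations: int) -> float:
    """Annual subscription scaled by managed PLCs and back-office integrations."""
    return 10_000.0 + max_plcs * 200.0 + integrations * 2_500.0

total = (edge_price(units=40, gallons_per_hour=120)
         + gateway_price(plcs=4, attached_devices=40, gb_per_month=50)
         + cloud_price_per_year(max_plcs=4, integrations=2))
print(f"Year-1 total: ${total:,.0f}")  # Year-1 total: $54,300
```

Note how each layer's metric (flow rate, attached devices, throughput, managed PLCs, integrations) drives its own line item, which is exactly what makes the layer-by-layer model flexible but complex to quote.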
Other possibilities for monetizing IoT solutions
As an alternative to monetizing the individual elements of the stack, assume the combination of offerings above was provided by a single vendor as a total solution. If the customer's initial deployment of the solution were used to produce laundry detergent, the overall model might be to charge by the gallons of laundry detergent produced per year. Some analysis is required to finalize the price, of course, but it's a much simpler way to sell the solution.
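The outcome-based alternative collapses all the layered line items into one metric. A toy sketch, with an entirely made-up rate and output volume:

```python
# Hypothetical outcome-based pricing: charge by what the solution produces,
# not by stack layer. Both numbers are illustrative assumptions.
rate_per_gallon = 0.002            # $ charged per gallon of detergent produced
annual_output_gallons = 25_000_000 # customer's yearly production volume
annual_fee = rate_per_gallon * annual_output_gallons
print(f"Annual solution fee: ${annual_fee:,.0f}")  # Annual solution fee: $50,000
```

One number to negotiate instead of a dozen, which is why this model is much simpler to sell.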
Approaching monetization within this framework will leave you well positioned for IoT success.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Last time, we talked about the advantages of the Lightweight Machine-to-Machine (LwM2M) protocol in managing constrained devices. Constrained devices are usually battery powered and have limited memory and CPU, which makes their lives rather short. Think of sensors, actuators and smart objects. When you consider LwM2M as a technology for managing these kinds of devices, what should you be prepared for, and what should you try to avoid?
Getting devices onboarded and managed is the first step in deploying IoT services. Yet companies continue to treat device management as an afterthought.
First and foremost, formulate a plan and work with your device providers to be fully compliant with the standard you are going to use. Oftentimes, a device claims full compliance but doesn't pass interoperability tests. Interoperability and standards-compliance testing will ensure you don't hit glitches once the devices are deployed on the network.
Given that IoT vendors are at various stages of supporting industry standards, compliance testing is critical to keeping a project on schedule. Automated LwM2M testing platforms are available, as are SDKs for device client firmware development.
Device firmware management is a major component of a device management platform. Being able to execute firmware over-the-air at a massive scale requires IoT service planners to consider the following:
- Correct communications between the targeted devices and the management server.
- Secure transfer of update files to devices, the timing of downloads and the scheduling of installations.
Here, LwM2M makes it easy. The standard offers a well-defined firmware update process. It lets the management server select the right moment to start file transmission and inform the device when it's time to install updates. It provides a standardized protocol for device management while allowing enough extensibility for device-specific proprietary features.
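The firmware update flow is defined by LwM2M's Firmware Update object (Object 5), whose State resource moves through Idle, Downloading, Downloaded and Updating. A minimal sketch of those transitions; the `Device` class is an illustration, not a real SDK:

```python
# Sketch of the LwM2M firmware update flow (Object 5).
# State resource (/5/0/3): 0=Idle, 1=Downloading, 2=Downloaded, 3=Updating.
IDLE, DOWNLOADING, DOWNLOADED, UPDATING = range(4)

class Device:
    """Toy stand-in for a constrained device's firmware-update object."""
    def __init__(self):
        self.state = IDLE
        self.firmware = None

    def write_package_uri(self, uri: str):
        """Server writes /5/0/1 (Package URI); device pulls the image."""
        assert self.state == IDLE
        self.state = DOWNLOADING
        self.firmware = f"image-from-{uri}"  # placeholder for the transfer
        self.state = DOWNLOADED             # ready, but NOT yet installed

    def execute_update(self):
        """Server executes /5/0/2 (Update) when it decides the moment is right."""
        assert self.state == DOWNLOADED
        self.state = UPDATING
        installed = self.firmware
        self.state = IDLE                   # back to Idle after install
        return installed

dev = Device()
dev.write_package_uri("coap://fw.example.com/v2.bin")
print(dev.execute_update())  # image-from-coap://fw.example.com/v2.bin
```

The key design point is the split between download and install: the device can fetch the image during a quiet window and the server triggers installation separately, exactly the "right moment" control described above.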
The protocol’s framework enables users to consider up-front requirements, desired data structures and features, and ensure that the devices can perform well not only in PoC, but at production volume as well.
One of the major benefits of this standard is its offering of well-defined data models. LwM2M provides a light but secure communication interface with an efficient data model, representing specific service profiles such as connectivity monitoring, temperature readings or firmware updates.
A strictly defined data model helps reduce interoperability problems. The protocol's data model provides easily accessible and reusable semantics for both device management and application data, according to OMA. The IPSO (Internet Protocol for Smart Objects) Alliance provides an open, reusable object model; users can access, contribute to or define their own objects.
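As an example of those reusable semantics, the registry defines a Temperature object with ID 3303, reusing common resources such as 5700 (Sensor Value) and 5601/5602 (Min/Max Measured Value). The Python class below is an illustrative sketch of that object, not a real SDK type:

```python
# Sketch of an IPSO/OMA reusable object: Temperature (Object ID 3303).
# Resource IDs (5700, 5601, 5602) are from the OMA registry;
# the class itself is an illustration.
from dataclasses import dataclass, field

@dataclass
class TemperatureObject:
    object_id: int = 3303
    sensor_value: float = 0.0                           # Resource 5700
    min_measured: float = field(default=float("inf"))   # Resource 5601
    max_measured: float = field(default=float("-inf"))  # Resource 5602

    def report(self, value: float):
        """Record a new reading and update min/max."""
        self.sensor_value = value
        self.min_measured = min(self.min_measured, value)
        self.max_measured = max(self.max_measured, value)

    def path(self, instance: int = 0) -> str:
        """LwM2M addresses resources as /object/instance/resource."""
        return f"/{self.object_id}/{instance}/5700"

temp = TemperatureObject()
for reading in (21.5, 19.8, 22.3):
    temp.report(reading)
print(temp.path(), temp.min_measured, temp.max_measured)  # /3303/0/5700 19.8 22.3
```

Because the object and resource IDs are standardized, any compliant server knows, without vendor documentation, that a read of `/3303/0/5700` returns a temperature.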
Today, most commonly available LwM2M SDKs on the market are vendor-specific, meaning a device operates correctly only with its chosen platform. As a result, compatibility becomes problematic. Many IoT implementations rely on messaging protocols such as MQTT, where users run into exactly this problem. It's worth noting that, despite being an open protocol, MQTT does not standardize any conventions for device management operations such as configuration changes, telemetry data formats or the aforementioned firmware updates.
Anyone can define an entirely different format for the messages sent to or received from devices. This creates a rigid coupling between the device agent and the management platform, making switching among vendors a major problem and adding huge costs.
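To illustrate the gap, here are two hypothetical vendors publishing the same temperature reading over MQTT with incompatible topics and payload shapes; both the topics and the JSON layouts are invented:

```python
# Two made-up vendors, one reading, two incompatible MQTT conventions.
import json

vendor_a = ("factory/line1/sensor42/temp",
            json.dumps({"t": 22.3, "ts": 1700000000}))
vendor_b = ("devices/42/telemetry",
            json.dumps({"readings": [{"type": "temperature", "value": 22.3}]}))

def parse_vendor_a(payload: str) -> float:
    return json.loads(payload)["t"]

def parse_vendor_b(payload: str) -> float:
    return json.loads(payload)["readings"][0]["value"]

# Each platform needs a vendor-specific parser; a standardized data model
# like LwM2M's lets one parser serve every compliant device.
print(parse_vendor_a(vendor_a[1]), parse_vendor_b(vendor_b[1]))  # 22.3 22.3
```

Multiply this by every management operation (config writes, reboots, firmware pushes) and the lock-in cost described above becomes clear.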
IoT providers should consider choosing an open and industry standard-based platform. This means that devices, including their underlying structures and semantics, can be ported from one platform to another. It also means that the data structure developed for one vendor’s device can be re-used across a competitive vendor’s device. Reducing single vendor lock-in helps the entire ecosystem grow.
At this stage of the game, it would be silly for any business or technology leader to overlook the massive potential of IoT. Discovering new sources of data and analyzing the resulting trends can deliver insights that create compelling competitive advantages over less savvy rivals. As Gartner research vice president Nick Jones notes in a press release, “The IoT will continue to deliver new opportunities for digital business innovation for the next decade…” But once the decision to explore IoT technologies has been made, many organizations flounder. In fact, it’s been reported that 74% of IoT projects fail, mainly because they focus on doing IoT instead of solving real business problems or enabling tangible new business opportunities.
The ideal approach for achieving optimal results for any IoT project is to start with the business problems the organization is trying to solve and then build or use a market-ready solution that addresses that very issue. For many businesses, however, leadership is only concerned with achieving the greatest ROI in the shortest period of time, instead of looking at how these projects can be built upon over time to generate new value for their customers and partners.
Luckily, most businesses already have the resources they need to tap into a trend that’s transforming industries, even though they might not know it. That trend is video intelligence.
Video cameras are nearly everywhere — at intersections, in airports, on factory floors and in retail stores — tasked with keeping us and our property safe and secure. But the data these dumb cameras capture lives up to only a fraction of its potential. It’s also difficult to manage, because cameras generate massive amounts of data very rapidly. Edge intelligence and faster networks such as 5G can help, along with the right data management strategy.
With layered intelligence and analytics capabilities, video can help usher in better efficiencies and less waste, improve public safety and create better experiences for citizens, customers and passengers, making video a top-tier data type that should be an important part of any organization’s data strategy.
Extracting insights from video
When analytics and video come together, they become a transformative force that can deliver actionable insights for data-driven decisions. With video data now available from almost everywhere in the physical world, video insights can support decisions about traffic, parking, transit, customer behavior and preferences, manufacturing quality assurance and much more. Here are a few examples:
Strengthen manufacturing: Every year, 20,000 warehouse workers in the U.S. suffer serious workplace injuries, according to the U.S. Bureau of Labor Statistics. Aside from the human pain and suffering, each injury costs employers an average of $35,000 in direct costs and $150,000 in indirect costs. With smart video, manufacturers can receive real-time alerts about safety violations, institute data-driven safety programs, improve incident response times, strengthen worker training and predict maintenance needs. This makes the workplace safer while significantly reducing costs. Keeping people safe makes business sense, and it’s the ethical thing to do.
Revolutionize retail: By using video intelligence to gather insights into customer behavior and preferences in their stores, retailers can deliver more personalized options to customers and reduce product loss. For instance, one major retailer used video data to determine whether shoppers weren’t buying items in specific areas of the store because they couldn’t find associates to serve them. By deploying salespeople to those areas when needed, the store increased revenues by 15%.
Streamline transportation: Transit organizations use video intelligence to detect crowded stations so the right number of buses or train cars can be deployed. Drivers can easily identify when and where parking spots are available and when mass transit would be a better option compared to driving. And cities can identify recurring traffic challenges, such as double-parking by rideshare drivers, and implement practical solutions like designated passenger loading zones.
Enhance pedestrian safety: AI software can use video data to prevent collisions by issuing immediate alerts to drivers when pedestrians are near, or train operators when a car is idling on the train track or to bus drivers when a person is crossing the street. By collecting data on near misses, city governments can improve safety and reduce incidents at the most dangerous locations.
Create smart campuses: By using video data to understand and optimize classroom and building usage, universities can use their existing resources more efficiently and push out new construction by several years, saving millions in operating expenses. Some campus data is even being exposed to students for research and class projects, giving them more hands-on, relevant preparation for the technology jobs they will apply for after graduation.
What about privacy?
Privacy, data governance and security are an important part of any data conversation. Everyone is concerned about maintaining control over their identities and personally identifiable information, so it’s great news that smart video can protect privacy and improve transparency even as it gathers and analyzes vast amounts of data. AI software automatically obscures and protects people in security videos in real time through pixelation, while movement and actions remain recognizable.
While not strictly a video technology, 3D lidar uses lasers to sense the world in real time; AI can analyze its output to glean valuable insights and real-time alerts without capturing any private information. 3D lidar is tailor-made for sensitive areas that demand privacy, such as hospitals and schools.
When potentially sensitive video footage needs to be viewed, access can be granted to authorized officers with proper credentials and identification. The actions these authorized individuals perform on the system are recorded and can be audited for full transparency, making the system ready for GDPR and other privacy regulations. By adding security measures to the data pipeline, intelligent data operations provide data orchestration, governance, security, privacy protection and compliance, further protecting the identities of those captured on video and ensuring that data usage is transparent.
As data becomes more and more valuable, these innovations will provide ever richer insights that improve quality of life, economies and sustainability, all while keeping privacy protected.
Video as the on-ramp to IoT
Organizations should start with a foundation of video as a prime data opportunity and combine it with data from other IoT sensors, ERP and process systems to gain deeper insights into how each element affects the others. This sets the stage for exponential ROI, taking advantage of a common infrastructure that lets multiple entities share data insights through a single pane of glass and unlock multiple outcomes.
For example, cameras are already common in many cities, and they have been steadily generating video data for years. Their initial focus might have been narrow, such as to improve public safety or manage traffic congestion. However, with smart city initiatives, some cities are using the same cameras to analyze and decrease traffic and parking. Less traffic results in lower emissions. Video cameras and infrastructure could also be used to collect economic data that may be applied to help local merchants thrive, determine real estate values and create new business opportunities.
Additionally, organizations involved in a smart city initiative can easily share data between entities for better insights and collaboration. By incorporating video insights into a smart city strategy, the city can improve the quality of life for its citizens while expanding economic development opportunities. The city’s ROI can be vastly increased by using data that’s already being collected in new ways and combining it with other forms of data for more sophisticated intelligence, extending the benefits of video insights to more of its citizens.
Although video has been in our lives for a long time, we are just beginning to tap into its full potential. By adding AI software to existing cameras and stored footage, we can better address a wide array of social and business problems and ultimately create a safer, smarter, healthier and more secure world for everyone.
Rapid population growth and urbanization will affect global cities most. By one estimate, more than 180,000 people move to cities every day. This migration to metropolitan and urban areas strains resources and increases congestion.
However, there is a simple solution that enables cities to manage this large influx of people.
The adoption of IoT technology has empowered the development of futuristic infrastructure and a connected ecosystem that fuels sustainable economic development and a high quality of life. These IoT-based solutions have elevated traditional operations and created new services that make cities more efficient, secure and cost-effective, offering benefits that make urban life more enjoyable and comfortable. Some of these benefits are:
Smoother traffic management
Conventional traffic management systems have many loopholes. For instance, at a four-way crossing, the traffic lights operate on the assumption that only a fixed amount of traffic will pass through the crossing in a given time interval. In practice, that is not the case at all.
In fact, it is quite difficult to even estimate and moderate traffic conditions. Even on a normal day, many factors such as congestion on crossings, speed of incoming traffic and drivers’ driving behavior play a role in determining traffic conditions.
IoT technology can be used effectively to manage the flow of traffic to some extent. It has enabled the development of high-end infrastructures that facilitate a smooth flow of traffic. Smart lights, signs, signals and even vehicles form a part of a connected vehicle-to-everything (V2X) network that enables the transportation of goods, services and people at an efficient rate.
An IoT-based traffic management system can monitor variables such as vehicle location and speed. Smart traffic signals can collect data from connected cars and time their lights to keep traffic flowing. Drivers can also be notified to adjust their speed so they catch green lights at each crossing.
This connected network between vehicles and the traffic management system enables a smooth flow of traffic and also reduces greenhouse gas emissions, making city life healthier.
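The adaptive-signal idea above can be sketched in a few lines: a light extends its green phase in proportion to the queue reported by connected vehicles. All timing values are illustrative assumptions, not from any real deployment:

```python
# Toy adaptive green-phase controller fed by connected-vehicle counts.
# Base time, per-vehicle extension and cap are made-up parameters.

def green_seconds(queued_vehicles: int, base: int = 20,
                  per_vehicle: int = 2, max_green: int = 60) -> int:
    """Base green phase plus extra time per queued vehicle, capped
    so cross-traffic is never starved."""
    return min(base + queued_vehicles * per_vehicle, max_green)

print(green_seconds(5))   # 30 -- light queue, modest extension
print(green_seconds(40))  # 60 -- heavy queue, capped at the maximum
```

Real systems weigh many more inputs (pedestrian calls, corridor coordination, emergency preemption), but the core loop is the same: sensed demand in, signal timing out.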
Efficient utility management
The smart cities of the future must have an efficient supply of energy and water; without it, calling a city smart is meaningless. With potable water and energy resources constantly depleting, IoT can help cities manage the supply of utilities to their citizens.
IoT can be used by utility providers to manage resources efficiently. It institutes a robust distribution system for both water and electricity that helps provide uninterrupted supply to the citizens. Moreover, IoT can aid in the detection of water and electricity losses and empower end-users to reduce the consumption of these supplies.
IoT has also led to smart meters and grids that help utilities distribute resources according to the distinct requirements of individual localities. This equipment can monitor the amount of water and electricity delivered to end consumers, which enables companies to bill customers dynamically.
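Dynamic billing from smart-meter readings can be as simple as a tiered tariff. A sketch with invented rates (real utilities use far more elaborate tariff structures):

```python
# Illustrative tiered electricity billing from a smart-meter reading.
# The 100 kWh threshold and both rates are made-up parameters.

def electricity_bill(kwh: float) -> float:
    """First 100 kWh at a low rate, the remainder at a higher rate."""
    low, high = 0.10, 0.25
    if kwh <= 100:
        return kwh * low
    return 100 * low + (kwh - 100) * high

print(f"${electricity_bill(80):.2f}")   # $8.00
print(f"${electricity_bill(250):.2f}")  # $47.50
```

Because the meter reports consumption continuously, the same logic can also run per locality or per time of day, which is the "dynamic" part.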
Effective waste management
Waste management is an important aspect of any city. Statistically, the average person generates more than 0.7 kg of waste every day. For clean and sustainable city living, it is important to collect and manage this huge amount of waste.
IoT-based smart waste management systems help municipalities reduce waste collection costs, prevent overflowing bins and analyze the trash generated on a regular basis. This reduces the spread of disease and helps create healthy neighborhoods.
Improved safety and security
The biggest concern for anyone is finding a locality in a city that ensures a safe environment. Along with the benefits above, IoT-based smart cities also enable surveillance via CCTV cameras.
Even though CCTV cameras are not a breakthrough piece of equipment, augmenting them with facial recognition via AI and machine learning makes them far more capable. These devices can be used to spot suspicious activity before a crime is committed. Police and law enforcement can also use them to conduct search operations. For example, known criminals can be recognized by AI-powered CCTVs, and the IoT system can instantly alert the police.
In addition, IoT-based panic buttons and hotlines can be instituted in different corners of a city to help emergency vehicles reach an accident site quickly. Smoke and fire alarms fitted in buildings and monumental sites of a city can help the life-saving authorities in a similar fashion.
Such advanced security systems help law enforcement and emergency services respond quickly in emergencies and establish a secure, protected environment in which citizens can live peaceful, safe lives.
The connected ecosystem developed from IoT provides countless benefits in terms of boosting efficiency, creating a sustainable environment and improving the lifestyle for the citizens. Its amalgamation with other disruptive technologies, such as AI, blockchain and machine learning, can even automate the day-to-day processes.
Smart cities are the future we'll live in. Investment in smart cities is expected to exceed $710 billion by the end of 2023, and IoT is clearly driving the change.
If you’re exploring IoT-enabled technologies for your enterprise, don’t overlook an asset you might already have on site, such as video cameras. IP cameras, which are digital video cameras that use the internet to receive control data and send back image data, are one of the most powerful data-collection tools available for IoT networks. And your business might be sitting on this source of untapped data and value.
By design, video cameras provide a nearly complete visual account of operations and security by collecting data from multiple sources, a scenario well suited to IoT. Unlike many other parts of the IoT ecosystem, IP cameras have been around for decades, meaning the technology is more mature, with the robustness and dependability that come from time-tested upgrades, as well as the ability to support a range of integrated technologies. This maturity also means manufacturers, installers and system integrators are more likely to be aware of cyber-risks and to have established best practices for preventing and monitoring potential breaches.
IP cameras are also ubiquitous. What part of modern life doesn’t involve an IP camera? We use them to monitor security in our homes, businesses and cities. We communicate through video on our phones and computers. They help cars and trucks navigate. We also use them for both entertainment as well as education. And nearly any of these cameras can be used as a business intelligence tool, provided the extra volume of data their digital video files generate is processed on the edge. Fortunately, hardware and software for edge processing abound with providers such as Amazon offering solutions that can provide real-time, secure processing capabilities. Consider these scenarios:
Video analytics and retail
One reason brick and mortar retailers struggle to compete with e-commerce shops is a relative lack of data, such as how many people visit their shops and when, as well as data about shopping patterns, such as where people browse, where they linger longest and which items they handle most. Fortunately, there’s a physical solution for these physical stores. Discreet IP cameras can track customer movement and behavior, providing the data that can reveal underlying reasons for both successes and misses on the floor and help store managers course correct accordingly.
When this video data is integrated with other networked sensors and systems, such as beacons, physical retailers can benefit from the deep, insight-rich analytics ecommerce providers have relied on for years. These analytics can help spark improvements, such as optimized store layouts and staffing levels, more personalized customer service, merchandising and marketing, and table stakes such as security and loss prevention.
Smart cameras, smart city
Say you’re planning to drive into town for an important meeting, and you know parking can be a challenge. Using networked cameras with video analytics, city managers can send parking information to your phone via an application showing what’s available and where. Sweeten the mix with analytics backed by AI, and your drive can be cross-referenced with parking pattern data to predict where a free spot is most likely to occur when you arrive. You might even be able to reserve that spot. All starting with cameras.
Eye in the sky, food on the plate
Shifting from town to country, farmers are also using cameras on drones for visual assessments, including mapping canopy cover and drainage, assessing crop growth and counting plants to predict yield. As in retail, data from these cameras can be combined with data from other sensors measuring light, humidity, temperature or soil moisture for even more detailed crop analysis.
Back in early 2015, I made a series of predictions about the trajectory of IoT at my company's user conference. The talk took place five years ago, but things move fast in IoT, so I looked back to see how my predictions fared.
1. An explosion of devices
Prediction: 50 billion connected devices by 2020
Reality: Technically wrong, but right in the larger scheme of things. The 50 billion devices prediction dates to the early part of the decade, promoted by Cisco and Ericsson. Cisco later upped its forecast to 500 billion by 2020. Gartner predicted 20.4 billion, 451 Research 8.8 billion and IDC 41 billion, though these figures excluded PCs, IT equipment and phones, and 451 Research applied a stricter definition.
The predictions were a bit optimistic, but here's the interesting part: The growth rate is faster than expected. In 2016, the world had 17.6 billion IoT devices and was on track for 30.7 billion by 2020, according to IHS Markit. By 2018, IoT devices were grazing the 31 billion mark, according to IHS. The forecast for 2030? 125 billion.
If you look at successful categories such as WiFi, smartphones and LCD monitors, you see a pattern of forecasts being regularly revised upward. The growth rate points to positive outcomes, though I now question the real value drivers behind this uptake.
2. Data variety will blossom
Prediction: We’ll see greater variety of data types, more complex data types and unusual combinations of data. For example, sensor or device specific context, such as GPS and location data, will be combined with different sources to produce unexpected insights.
Reality: Yes, there is a proliferation of new cost-effective sensing technologies and sensors that are leveraging a broader range of human senses, such as acoustic, vision and smell. These technologies are also sensing phenomena way outside the bounds of human capability, driving incredible improvements in physical and situational awareness.
But what’s equally interesting is the unusual combinations and use cases. For example, insurance companies have begun to experiment with IoT devices to monitor the health of pipes as a way to reduce water-related claims. A mining company in Canada, Syncrude, uncovered a driver safety problem while mining oil temperature data.
What's the fastest-growing market in IoT? Fast food, according to Charlie Wu, product manager at Advantech.
3. Big data will get bigger
Prediction: Cheap sensors and proliferating techniques for extracting value from data will mean that the conventional idea of a refinery having 100,000 or even 500,000 sensors gets “blown away.”
Reality: Correct, but I missed a bigger issue. The total amount of data continues to double every two years. Real-time data has also become one of the fastest-growing categories of information and will double its share to 30% of all data by 2025, according to IDC.
Here’s the twist: In industry, we’re not seeing new sensors as much as companies taking advantage of data they already collect. For example, a graduate student working for Lonza — which makes specialty ingredients for food and pharmaceutical companies — figured out a way to potentially increase capacity by 15 to 20% using data Lonza already had.
4. Devices get dynamic
Prediction: Companies will shift from fixed-location and fixed-function sensors, shifting instead to dynamic sensors and devices that can be configured for different applications and locations.
Reality: I was expecting a move from specialty appliances and sensors to agile and dynamic solutions, equivalent to the “Software Defined X” phenomenon, just as I am expecting this to take root in the processes and operations of traditional manufacturing. Efforts such as Open Process Automation hint at this agile approach.
The dominant paradigm is still mirrored in companies such as Petasense, which has successfully brought devices like this to market. The declining cost of hardware might also result in fewer dynamic devices and more rip-and-replace devices. For example, EDJX is coming out with nano servers made from inexpensive or used parts that can be run to failure.
I am now more conservative about when I expect this phenomenon to reach critical impact. Which paradigm wins remains to be seen, but I don’t expect it to be either-or.
5. Protocols gone wild
Prediction: Vendors will develop their own protocols to meet their specific needs, whether driven by the nature of the information, the overall architecture or cutting-edge evolutions in technology. Unfortunately, that will create confusion and incompatibility.
Reality: Unfortunately, this is true. While industry standards such as OPC UA have achieved broad adoption, each vendor tends to implement the standard differently, resulting in incompatibilities. If history is any guide, many of these problems will work themselves out over time. In contrast, networking is seeing the evolution of different standards for different tasks: 5G targets devices that require constant, fast, high-volume bandwidth, while LoRa targets applications that generate less data, less urgently. Over time, expect a few to emerge as favorites for particular tasks and use cases.
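The 5G-versus-LoRa split described above is essentially a decision rule over an application's bandwidth and urgency requirements. A hedged sketch of that rule follows; the thresholds are illustrative and not taken from any standard.

```python
def pick_network(kbps_needed: float, max_latency_ms: float) -> str:
    """Toy decision rule: pick a network technology by bandwidth and urgency.
    Thresholds are made up for illustration, not drawn from 5G or LoRaWAN specs."""
    if kbps_needed > 1000 or max_latency_ms < 100:
        return "5G"   # constant, fast, high-volume links
    return "LoRa"     # sparse, delay-tolerant telemetry

print(pick_network(50_000, 20))   # video analytics workload -> 5G
print(pick_network(0.3, 60_000))  # soil-moisture sensor -> LoRa
```

Real network selection weighs many more factors (power budget, coverage, cost), but the sketch captures why both standards can coexist: they occupy opposite corners of the bandwidth/latency space.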
6. Community and data sharing
Prediction: Data sharing between organizations presents tremendous opportunities for efficiencies, but questions about security and privacy will invariably make acceptance gradual.
Reality: True, but there’s more to it; some sharing is occurring. For example, Eli Lilly monitors contract manufacturers in real time to help ensure its product quality. Similarly, YPF uses real-time monitoring to track the performance of its third-party wind-power providers.
We see significant advances in supply chains. And there are massive needs for community and data sharing if we are going to solve the future energy generation and distribution challenges as renewables such as distributed energy and microgrids continue to disrupt the centralized approach to energy. But still, widespread communities are not here yet.
One big thing I missed here was blockchain, which could accelerate communities by enabling decentralized solutions.
7. Computing will become geo diverse
Prediction: While a building or a factory is rooted in one spot, the computing resources to better optimize them could be anywhere, and increasingly will be exploited by a wider variety of people. Public clouds will be necessary to efficiently manage this intelligence.
Reality: Partly right. Public clouds play a critical role in helping companies analyze large data sets or handling certain types of applications. But it’s becoming increasingly clear that a significant percentage of compute tasks and data storage will continue to take place locally, either inside facilities or devices. Concerns about latency, bandwidth availability and bandwidth cost are driving a shift to hybrid computing architectures.
To put it another way, we regularly swing from eras of centralized to distributed computing architectures. IoT is pushing the pendulum toward distributed.
8. Data silos will rear their ugly heads
Prediction: Controlling every aspect of an IoT application, such as device analytics and data, might be good for vendors, but it’s terrible for customers.
Reality: Luckily, it’s not happening as much as it could have, as consolidation and exits in this market have helped. In addition, the approach is evolving: IoT platforms will increasingly not be offered as monolithic services. Rather, companies will focus on a few core areas of expertise, such as analytics, device management and security, and customers will weave together platforms from these offerings.
I think I generally predicted the direction of the market. What’s going to be most interesting is to watch the development of data communities. Technology can be easy to predict because hardware gets better and cheaper while algorithms become more accurate. Human behavior and cooperation are more of a wild card.
IoT devices have changed the way we live. Now, they are reshaping work.
There’s a lot of hype around IoT devices. But the hype is about to get real. The technology that we use to help us at home has made its way into the office. And it’s about to change the way we work in monumental ways.
Most of us start our work days by logging in and pulling up our calendars to see where we need to be. Then we open our inbox and wade through email to see what we need to do. Then we start navigating multiple apps to complete mundane tasks like filing expenses and approving time off. Then we begin searching for the information and insights we need to do our real work. It sucks up the bulk of our time and leaves us feeling frustrated and unproductive at the end of the day.
But work is about to get simpler and smarter. And we will be more engaged and productive as a result. IoT devices are no longer confined to our kitchen counters and nightstands. They’re on our desktops and conference tables and embedded in our digital workspaces. Now when we log on, we don’t have to search for anything. The most pertinent tasks and insights we need to focus on are automatically delivered to us in an intelligent feed on the devices we prefer to use — our phones, laptops, tablets and even smart watches. With the click of a button, we can execute mundane tasks in seconds. And it’s all delivered in context with intelligence, so we can focus on meaningful work and create value.
It’s exciting stuff, but according to James Bulpin, senior director of engineering at Citrix, we haven’t seen anything yet. I recently sat down with James to get his take on what the future holds when it comes to IoT devices and the future of work.
Steve Wilson: IoT devices have certainly changed the way we live. And while things have been painfully slow on the work front, they seem to be picking up speed. What’s driving this?
James Bulpin: Artificial intelligence-powered, voice-controlled IoT assistants like Alexa, Google Assistant, Siri and Cortana are already commonplace in many homes. This popularity is building pressure to integrate virtual assistant functionality into enterprise technology as well, which will significantly reconfigure and enhance the employee experience. Gartner, Inc. predicts that by 2021, “25% of digital workers will use a virtual employee assistant on a daily basis. This will be up from less than 2% in 2019.”
Wilson: How will we interact with our IoT devices at work?
Bulpin: Currently, the relationship between humans and IoT devices, such as virtual assistants, is predominantly a transactional one, relying on simple voice commands like “Alexa, tell me the weather forecast for this afternoon.” However, the AI and machine learning that power IoT devices is progressing rapidly, and in the near future, things like virtual assistants will be far more than just a voice or chatbot interface. In fact, the virtual assistant is likely to become a pervasive form of intelligence across the workplace that can surface through all digital platforms and resources, including data and apps, helping individuals to accomplish their daily tasks more efficiently.
Wilson: So are they going to put us all out of jobs?
Bulpin: Such innovation will always trigger some concerns over the technology’s impact on people and their job security and the demand for skills. But it’s more likely that IoT devices will make work a better experience for everyone. Workers will always face certain limitations due to our personal capacity for work or mental processing power, for example. IoT devices can help people and organizations to do and achieve things they couldn’t otherwise. Eventually, we envisage the creation of a level playing field between workers and their devices, built upon a relationship of mutual trust and collaboration, where the device undertakes more routine tasks for the individual, allowing them to focus on delivering their best work.
Wilson: How do you see the IoT market evolving?
Bulpin: The natural-language processing of voice recognition technology is growing steadily in sophistication, and eventually, conversations between an individual and their devices will be peer-to-peer, indistinguishable from human-to-human conversations.
Beyond this, the next logical step is for IoT technology to have the ability to understand human gestures. We’re already exploring the potential of gesture-recognition technology in all its forms, which will enable devices to interpret priorities and passion points, for example by identifying when human gestures have become more animated. Gestures could include pointing, eye gaze, and arm movement.
And before long, they will begin to independently solve problems and make proactive suggestions for workers. They will have the ability to calculate an individual’s workload, perhaps suggesting when to take a break, as well as to highlight the tasks that should be prioritized or delegated. By this point, workers may come to appreciate that IoT devices might even know “best” based on analysis of previous behavior and patterns.
Wilson: So it really is all about augmenting employees and empowering them to use their special skills to do meaningful work and create value.
Bulpin: Exactly. Ultimately, IoT devices will help an individual to organize their work or tasks to keep them productive, while also understanding their personal capabilities. They may also begin to take on some monotonous, repetitive tasks to assist workers further, allowing them to spend more time engaged in high-level thinking, creativity and decision-making, making it possible to focus on the best, most interesting work, most of the time.
With all the industry hype and confusing claims coming from various robotic process automation vendors, gaining clarity on the “different flavors” of this software is now a major issue. Without a definitive understanding of what robotic process automation is, organizations risk choosing the wrong options. Gartner agrees and predicts that through 2021, 40% of enterprises will have robotic process automation buyer’s remorse due to misaligned, siloed usage and inability to scale. Unfortunately, these issues typically reveal themselves after a proof of concept has been completed and attempts are made to scale automation programs more broadly.
To avoid these pitfalls, robotic process automation (RPA) must enable operational agility while not replicating slow and expensive IT projects, and still meet all the governance, security and compliance requirements. To achieve these goals, RPA must be underpinned by no-code, collaborative, business-led design principles that offer a platform for humans and digital workers to deliver automation capabilities. RPA must also deliver on its promise of being a potentially transformative tool — especially at scale — by enabling business users to inject greater speed, accuracy, productivity, efficiency and innovation.
To fully realize these outcomes, here are a few key must-haves to consider when selecting RPA technology:
A no-code, business-driven initiative
Due to a scarcity of software development skills, those RPA technologies that require coding effort will soon be stuck in the work queue and suffer the same delays as traditional IT projects. The IT department’s proper role in RPA should simply be to uphold the necessary governance, security and compliance requirements for business-sustainable transformation.
Therefore, consider a connected-RPA platform that solves the long-standing integration challenge of system interoperability by repurposing the user interface and providing code-free connectivity with any system. This innovation also enables digital workers to use and access the same IT systems and mechanisms as humans, so they can automate processes over any past, present and future system independently of machine APIs.
Being business-led also means that non-technical users shouldn’t have to build or program digital workers. They should be able to use an intuitive OS to train and control them by creating automated processes by drawing and designing process flowcharts, then publishing them to a secure, central system. While designing the processes, users orchestrate a unique process definition language that both robots and humans understand, which also eliminates the need for coding.
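The flowchart-as-process idea above can be sketched as data that a software "digital worker" walks step by step. This is an illustrative toy only; the step names and structure are hypothetical and do not represent any vendor's actual process definition language.

```python
# Hypothetical published process: each node holds an action and a next step.
process = {
    "start":    {"action": lambda ctx: ctx.update(order=ctx["raw"].strip()),
                 "next": "validate"},
    "validate": {"action": lambda ctx: ctx.update(valid=bool(ctx["order"])),
                 "next": "finish"},
    "finish":   {"action": lambda ctx: None, "next": None},
}

def run(process: dict, ctx: dict) -> dict:
    """Walk the flowchart from 'start', applying each step's action to the context."""
    step = "start"
    while step is not None:
        node = process[step]
        node["action"](ctx)
        step = node["next"]
    return ctx

result = run(process, {"raw": "  PO-1234  "})
print(result["order"], result["valid"])  # PO-1234 True
```

The design point is that the process lives as inspectable data in a central repository rather than as code, which is what lets non-technical users author it and lets the platform audit every published version.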
Deliver real value
To offer real business benefits, a digital worker should also be a pre-built, highly productive, human-like, multi-tasking, full-service, modular and autonomous device that understands the context of processing tasks. These intelligent capabilities enable them to perform activities in the same way as humans do, though faster and more accurately, while also working with and learning from humans, other robots and systems.
The universal system connectivity of these digital workers can also enable business users to deploy AI, natural language processing, intelligent optical character recognition, communication analytics, process optimization and machine learning capabilities.
Drive collaboration and scale
Agile transformation at scale is only ever achieved through centralized effort, so insist on collaboration. RPA must enable users to share, reuse and expand automation capabilities by contributing their published process automations into a centralized repository. This enables the whole enterprise to manage, share, improve and re-use these assets. This centralized design also provides clear audit trails of all process automations and greater security, both of which are key factors in driving successful RPA outcomes.
To ensure that business-led and IT endorsed RPA projects are successfully sustained as they evolve, consider a vendor that employs a Robotic Operating Model delivery methodology. This defines how to most effectively identify, build, develop and automate processes at scale across an organization.
Alternative RPA options
Within this rapidly growing market, buyers are being misdirected toward a whole swath of RPA-branded tools that diverge from RPA’s original no-code, collaborative and business-led philosophy. These RPA vendors promise easy-to-use, instant record-and-deploy automation tools, but the reality is that they bring code-heavy deployments, endless debugging activities, code-based versioning and project-artefact management, as well as dependency and change management overheads.
Let’s be clear: These automation technologies offer limited scaling capabilities, which is no good for businesses looking to use these tools to transform their operations.
The problem with desktop recording and the notion of a personal software robot is that a single human user is given autonomy over a part of the technology estate — their desktop — which introduces a lack of control and creates multiple security and compliance issues. Desktop recording spells trouble for the enterprise as it captures choices based on an individual’s interpretation of a process versus a central consensus for the best path. This obscures a robot’s transparency and hides process steps that, when duplicated over time, become a potential security threat and a limit to scale.
There are two more major drawbacks of the desktop approach to automation. Firstly, if a robot and a human share a login, no one knows who’s responsible for the process, and this creates a massive security and audit hole. Secondly, if a robot and a human share a PC, there’s zero productivity gain as humans can use corporate systems as fast as robots. This approach doesn’t save any time or make the process any easier for the user.
Many of these desktop automation deployments never get beyond simple sub-tasks, which have been executed using an agent’s login and run on their own desktop. Although helping with that task, they deliver very limited capabilities and are not transformative at all.
Ultimately, choosing the wrong brand of RPA can limit the scale and potential of automation to the confines of the desktop and introduces a variety of risks. However, connected-RPA provides the platform for business-led collaboration, securely and at scale. In fact, by using this version of RPA, more than 1,800 of the world’s large organizations are achieving major productivity increases, greater innovation and improved processes so that they can stay agile and ahead.
As connected devices continue their explosive growth, Thing Commerce is moving from niche to mainstream. So, what exactly is Thing Commerce? It is where connected machines, such as smart home appliances and industrial equipment, make buying decisions for people by either taking direction from customers or by following a set of rules, context and individual preferences. Thing Commerce will ultimately include buying things, reporting a problem, requesting services and negotiating a deal.
The rise of Thing Commerce will see more companies and consumers interacting with virtual assistants in smart appliances that can make purchases on their behalf. The days of remembering to buy milk and ensuring that fresh produce is not past its sell-by date will be over. These tasks will all be handled by interconnected machines that deliver a frictionless commerce experience for customers.
However, before this utopian reality can happen, businesses need to rethink how they develop and test the software and systems to support this new age of commerce. There are three core elements that thing commerce providers must embrace as they build and deliver software and applications:
Test the user experience
With Thing Commerce, there are multiple products and services composed of a variety of technologies from an array of vendors. As a result, development teams across the ecosystem need to reorient from focusing on testing code compliance to understanding the actual user experience.
Embracing a user-centric approach to testing ensures you identify errors, bugs and performance issues before they have the chance to impact the user experience. This requires adopting an intelligent test automation platform.
Intelligent automation and bug hunting are mission-critical
The only way to truly test the Thing Commerce ecosystem from the user perspective is to utilize an intelligent automation engine. Intelligent AI-driven automation creates a model of user journeys and then automatically generates test cases that provide thorough coverage of the user experience, as well as system performance and functionality.
In addition, the AI algorithms hunt for errors in applications based on user journeys automatically generated from this bug-hunting model. This approach enables teams to quickly find, identify and address problems before release.
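One concrete way to picture the user-journey model described above is as a directed graph of screens, with each start-to-goal path becoming a generated test case. This is a minimal sketch under assumed screen names; real intelligent test platforms build far richer models.

```python
# Hypothetical user-journey graph: screen -> screens reachable from it.
journeys = {
    "home":     ["search", "cart"],
    "search":   ["product"],
    "product":  ["cart"],
    "cart":     ["checkout"],
    "checkout": [],
}

def generate_cases(graph, start, goal, path=None):
    """Yield every start-to-goal path; each path is one test case to execute."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, []):
        yield from generate_cases(graph, nxt, goal, path)

cases = list(generate_cases(journeys, "home", "checkout"))
print(cases)
# [['home', 'search', 'product', 'cart', 'checkout'], ['home', 'cart', 'checkout']]
```

Enumerating paths from the model, rather than hand-writing scripts per screen, is what gives coverage of the journeys users actually take, including the ones testers forget.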
Continuous testing, continuous learning and predictive trends
Testing any digital experience is not a one-and-done exercise. It must be a continuous process so that you’re monitoring the digital experience over time. An AI algorithm will watch test results, learn and look for trends. The learning algorithms will enable predictive analytics. For example, it can identify if the increasing delay in a particular workflow is likely to result in the connected system failing to replace out-of-date produce before the family meal.
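The trend-watching idea above can be sketched with nothing more than a least-squares slope over per-run measurements: a workflow whose delay drifts steadily upward gets flagged before it fails outright. The readings and threshold are made up for the example.

```python
def slope(values):
    """Least-squares slope of the values against their run index (0, 1, 2, ...)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical delay for one workflow, one reading per nightly test run (ms).
delays_ms = [120, 125, 131, 140, 152, 166]
if slope(delays_ms) > 5:  # more than 5 ms of drift per run
    print("workflow delay trending up -- investigate before it fails")
```

A production system would use more robust statistics, but the principle is the same: continuous testing produces a time series, and the algorithm learns from the trend rather than from any single pass/fail result.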
Thing Commerce promises a world of possibilities that will free people from many mundane chores. However, for Thing Commerce to realize its potential, it’s essential that organizations change the way they develop software to ensure it delivers a consistent digital experience that delights customers. If not, Thing Commerce might erroneously deliver 20 bottles of milk to a smart fridge, delighting no one and incurring a lot of friction along the way.
A big barrier to effectively securing IoT and operational technology devices is simply not knowing they are there. Lack of visibility has been a recurring theme in FireMon’s annual “State of the Firewall Report,” as has managing complexity. The report doesn’t even begin to dig into the impact of IoT growth.
This year’s survey found that 34% of respondents reported having less than 50% real-time visibility into network security risks and compliance. From a firewall perspective, respondents are dealing with a lot of complexity: nearly 33% reported having between 10 and 99 firewalls in their environment, while 30.4% reported having 100 or more. Additionally, nearly 78% are using two or more vendors for enforcement points on their network, while almost 60% have firewalls deployed in the cloud.
Given the challenges firewalls create for security professionals, you can imagine how the exponential growth of IoT endpoints is compounding complexity. This is partially because they behave differently, and in turn, they must be onboarded and managed differently.
IoT and operational technology endpoints are driving enterprise network growth
IoT visibility has become a crucial area in the security market, and more traditional vendors — including Palo Alto, Checkpoint, Forescout and Cisco — are responding accordingly by acquiring IoT expertise and operational technology (OT) know-how.
As data center workloads migrate to cloud computing and infrastructure-as-a-service delivery models, a significantly larger percentage of the enterprise network will be comprised of IoT and OT endpoints. Previously siloed systems — such as security cameras and sensors, turnstiles, badge readers and even building control systems — are converging with more traditional enterprise endpoints such as desktops, laptops and servers into a single, fluid IP-based infrastructure.
With everything on one network that no longer has a clear and defined perimeter, threats can easily migrate between the smarter, evolving OT areas into the IT domain, which makes visibility more essential than ever.
You must have visibility to manage IoT and OT complexity
Obtaining the required level of visibility demanded by an environment populated by IoT and OT endpoints requires automation, something FireMon’s report also identified as a frequent pain point for respondents.
IoT and OT endpoints demand automation, both from an initial discovery perspective and from an ongoing status perspective. Given the nature of the devices, endpoints such as security cameras and turnstiles can be added in large volumes at once or on a piecemeal basis. Devices must also be checked regularly to ensure they are operating normally.
Visibility means having a consistent view of all these endpoints, including basic characteristics such as connectivity and device function. It also means understanding the infrastructure they’re connected to and how even a simple OT device can affect more complex IT operations if it’s not properly provisioned from a security perspective. The way these devices connect to the network can open unexpected and unwanted paths into the heart of the organization. All it takes is one leak to drastically affect security and compliance posture.
In order to achieve adequate visibility, IT admins must see IoT and OT endpoints being onboarded in real time so they can automatically apply global security policies and segment devices. This is necessary to limit the negative impacts of any anomalous activity, which must be easily detectable for security teams to proactively respond.
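The onboard-and-segment flow described above can be sketched as a simple mapping from device type to network segment and baseline policy, with unknown types quarantined by default. The segment names, policies and device types here are hypothetical.

```python
# Hypothetical mapping: device type -> (network segment, baseline policy).
SEGMENT_POLICY = {
    "camera":    ("iot-video", "deny-internet"),
    "turnstile": ("ot-physical-access", "deny-all-inbound"),
    "hvac":      ("ot-building", "deny-all-inbound"),
}

def onboard(device_id: str, device_type: str) -> dict:
    """Assign a newly discovered endpoint to a segment; quarantine unknown types."""
    segment, policy = SEGMENT_POLICY.get(device_type, ("quarantine", "deny-all"))
    return {"id": device_id, "segment": segment, "policy": policy}

print(onboard("cam-042", "camera"))      # lands in the iot-video segment
print(onboard("dev-9", "smart-kettle"))  # unknown type -> quarantine
```

The default-to-quarantine choice matters: automation only limits blast radius if devices the platform cannot classify start with the most restrictive policy rather than the most permissive one.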
Establishing visibility of IoT and OT endpoints as part of the broader IT landscape enables you to begin tackling the unique complexity they bring to the network.
Diversity and variety compound complexity
The diversity of IoT and OT endpoints should not be underestimated. In the same way multi-cloud environments add to complexity and confusion over shared security responsibility, there are new device behaviors security professionals must be ready to handle.
Just as servers, desktops, laptops and smartphones can all begin to misbehave and pose a threat to the corporate network, so can the many IoT and OT devices being added to fluid IP infrastructure. The failures and glitches of more traditional hardware tend to be par for the course for security teams, but the variety and diversity of IoT and OT endpoints make things more complicated, especially when a device functions in an unexpected way.
The consequences of these endpoints being compromised have significant ramifications. In the healthcare realm, the devices can be lifesaving, and in many other scenarios such as energy generation and delivery, water and waste management, and traffic control, their security is paramount to keeping people safe and maintaining quality of life for entire communities.
Because most of these endpoints are embedded, enclosed devices, often the ability to secure them using agents — as with more traditional IT endpoints — is somewhat limited. This means visibility, discovery and management must be much more network centric. In the same way IT security teams have evolved to manage and monitor hybrid cloud environments, IoT and OT endpoints have further diversified the environment to create a more dynamic infrastructure.
A unified view requires a network-centric approach
Reducing complexity and increasing real-time visibility means having a single platform that will discover, monitor and remediate when necessary — not just cloud, virtual, physical and software-defined network infrastructure, but also the proliferating IoT and OT endpoints that merge with traditional IP infrastructure.
A network-centric approach solves the IoT and OT device conundrum because it discovers and monitors the cloud accounts, network paths and endpoints inherent to traditional IT infrastructures. It also watches for changes in real-time to identify new leak paths that might be created by IoT/OT environments. By correlating a comprehensive understanding of your enterprise’s active IP address space against known threats as new data becomes available, including IoT and OT endpoints as they are connected to the network, you have intelligence you can act on.
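At its core, the correlation described above is an intersection between the enterprise's known-active IP space and a threat feed, recomputed as new data arrives. A minimal sketch follows; the addresses are purely illustrative.

```python
# Active IP space as assembled by continuous discovery (illustrative values).
active_ips = {"10.0.1.5", "10.0.2.17", "10.0.9.30"}

def actionable_hits(active: set, threat_feed: set) -> set:
    """Endpoints that are both live on our network and flagged by the feed."""
    return active & threat_feed

# A hypothetical threat-feed update arrives; only overlapping entries matter.
feed_update = {"10.0.9.30", "203.0.113.7"}
print(actionable_hits(active_ips, feed_update))  # {'10.0.9.30'}
```

Filtering the feed against what is actually live on the network is what turns raw threat data into intelligence you can act on: an indicator for an address that isn't present is noise, while one that matches a freshly onboarded IoT endpoint is a work item.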