This article is the fourth in a six-part series about monetizing IoT.
This article examines monetization metrics, which are the units of value that enable a company to generate more revenue as use or adoption of a product increases, and how product managers can identify and price the units of value that matter to customers. This evaluation of unit structures for pricing products builds on concepts previously addressed in this series. The first article describes IoT monetization strategies, the "why" behind the work. The second article introduces a two-part framework for defining the "what" and "how" of monetizing the IoT stack. The third article addresses the third key concept in monetizing IoT, understanding monetization models and strategies, and provides a deep dive into different revenue models and how they may apply to your business.
What are monetization metrics?
A monetization metric is one of two structures, along with the product structure, that define the units of usage or value to which individual product pricing is attached. In price book parlance, a monetization metric is the unit of "each" used when buying items. In other words, it is what you count so that customers pay more as they use more, generating more revenue.
This is one area of monetization where art meets science. Properly structured, monetization metrics enable a company to generate more revenue as product use or adoption increases. However, improperly selected metrics can lead to revenue leaks or products that are overly complex to configure and price. For example, metrics might be the number of named users for IoT design software, the number of end points managed by SCADA software, or the amount of data analyzed in a cloud-based IoT analytics application. In each example, as usage increases, the customer receives more value and the supplier should receive more revenue.
Properly aligning a metric to value is an area where product managers can be creative to gain a competitive advantage and increase revenue. The goal is to structure price according to a meaningful unit value while calculating the actual price by business goals, such as profit, market penetration and brand. Increasingly, the value of products in IoT are created by digitally enabled functions. Product managers can adopt the principles used for structuring monetization metrics in other digitally enabled functions, namely software products.
How applicable is a metric?
Whether or not a metric is applicable for software and other digitally enabled products depends on four criteria:
- The metric should be simple so that customers know what they are buying. Defining metrics by a calculation is typically not transparent, making it difficult to determine what might happen as product adoption increases. For example, the number of analysis reports generated by IoT cloud analysis software is easier to understand than a calculation that involves data throughput, average daily data loads, and terabytes of storage consumed.
- The metric should represent a reasonable measure of value for the customer. For example, pricing engineering design software by the number of total employees at a company will not be viewed as fair, but pricing HR software by the number of employees at a company will seem fair.
- A metric is best if it provides a way for the supplier to generate more revenue if usage or adoption of the product increases. If the number of endpoints connected to a programmable logic controller increases, it’s reasonable that the supplier is entitled to additional fees.
- Actual usage should be measured so that compliance, supplier fees and customer costs can be accurately determined. For example, pricing software by the number of API calls that are performed in a month by an IoT endpoint is not a good metric if the number isn’t being counted anywhere.
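The fourth criterion, measurability, can be made concrete with a minimal metering sketch. The class, identifiers and unit price below are all hypothetical, chosen only to illustrate counting billable events (here, API calls per endpoint) so that supplier fees and customer costs can be verified:

```python
from collections import defaultdict

class UsageMeter:
    """Minimal usage meter: counts billable events (e.g. API calls)
    per endpoint so fees can be verified against a price book."""

    def __init__(self, unit_price: float):
        self.unit_price = unit_price    # price per metered unit
        self.counts = defaultdict(int)  # endpoint_id -> events this period

    def record(self, endpoint_id: str, n: int = 1) -> None:
        self.counts[endpoint_id] += n

    def invoice(self) -> dict:
        """Per-endpooint charges for the current billing period."""
        return {ep: round(n * self.unit_price, 2)
                for ep, n in self.counts.items()}

meter = UsageMeter(unit_price=0.002)  # $0.002 per API call (illustrative)
for _ in range(1500):
    meter.record("sensor-42")
meter.record("sensor-7", n=300)

print(meter.invoice())  # {'sensor-42': 3.0, 'sensor-7': 0.6}
```

If no such counter exists anywhere in the stack, the metric fails the fourth criterion regardless of how well it scores on the other three.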
What metric is most suitable?
When selecting metrics, keep in mind that some metrics might meet two or three of the criteria above, but not all four. For example, some software is monetized by the number of cores of the machine where the software is running. This might not accurately represent the value received, but the function might be so complex or dynamic that no other metric can be measured.
Products might be sold at different price points for different metrics, creating a stratified offering. Some IoT controllers might be sold at one price based on the number of concurrently active endpoints, while another price might apply if the metric is the total number of endpoints, active or not.
Complex products, or those that cross the IoT solution stack, could be sold by multiple metrics. An IoT analytics platform that manages cargo container traffic through a shipping port might combine metrics such as the number of users of the solution, the amount of cargo analyzed, and the amount of high-risk cargo identified and verified at the port.
Conversely, solutions that include multiple elements of the IoT solution stack might even use a simpler metric. Using the same example of a cargo container traffic solution, a supplier might have performed usage analytics on previous deployments and may have determined that it can simplify the offering by setting the metric to be the total number of ships that pass through the port in a year.
In the next article of this series, we’ll look at another element of monetizing products: the product structure or bundling. The sixth and final article will tie together all the elements of the series.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
With the emergence of 5G, big data is going to experience a seismic shift, promising data rates 100 times that of 4G, network latency of under one millisecond, support for one million devices per square kilometer and 99.999% availability for the next-generation network. The exponential growth in data velocity and volume under 5G will increase the complexity and demands of operational analytics, disrupting the way organizations ingest, store and analyze data.
5G is also poised to advance IoT by providing faster device and sensor connectivity with higher data capacities. By 2021, IoT endpoints will reach an installed base of 20.4 billion units, according to Gartner. When paired with 5G networks, these endpoints will produce unforeseen amounts of data.
The true value in 5G-generated data lies in making it actionable, which is only possible when the data is analyzed in real time to make more intelligent decisions. To take advantage of this influx of data, both private and public organizations will have to redesign their data stacks to process information closer to the edge to cut down on latency.
So, what are the top considerations for organizations looking to capitalize on IoT data in a 5G world? Let’s look at the requirements:
Event stream processing
To make sense of the vast amount of data resulting from more than 20.4 billion IoT endpoints, real-time complex event stream processing (ESP) needs to go beyond simple data moving and aggregation to keep track of some key performance indicators (KPIs). The data needs to drive cognitive decisions, combining the insights from predictive and prescriptive analytics with the fundamental contextual correlation. These decisions need to happen very rapidly with ultra-low latency and closer to the edge of the IT network to facilitate machine-to-machine (M2M) communications.
Having a contextual state is crucial to making meaningful business decisions based on data generated by connected systems, but legacy ESP frameworks and some contemporary streaming technologies, such as Apache Kafka, KSQL and Kafka Streams, either offer static state — used primarily for enrichment — or a state that is isolated to an individual stream, limiting processing to very basic data models.
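The difference between static enrichment state and genuinely contextual state can be illustrated with a toy stateful stream operator. Everything below is a hypothetical sketch, not any particular ESP framework's API: each device carries its own rolling window of recent readings, and a new reading is judged against that per-device context rather than a fixed rule:

```python
from collections import deque

class StatefulKPI:
    """Toy stateful stream operator: keeps a rolling window per device
    and flags readings that deviate sharply from that device's context."""

    def __init__(self, window: int = 5, threshold: float = 10.0):
        self.window = window
        self.threshold = threshold
        self.state = {}  # device_id -> deque of recent readings

    def process(self, device_id: str, value: float):
        hist = self.state.setdefault(device_id, deque(maxlen=self.window))
        alert = None
        if hist:
            baseline = sum(hist) / len(hist)  # contextual baseline
            if abs(value - baseline) > self.threshold:
                alert = (device_id, value, baseline)
        hist.append(value)  # mutate per-device state as the stream flows
        return alert

op = StatefulKPI()
stream = [("pump-1", 70.0), ("pump-1", 71.0), ("pump-1", 95.0)]
alerts = [a for a in (op.process(d, v) for d, v in stream) if a]
print(alerts)  # [('pump-1', 95.0, 70.5)]
```

A static-state engine could only enrich each event with lookup data; here the decision depends on evolving state that the stream itself keeps writing.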
With the proliferation of 5G, most modern businesses are going to require cognitive decisioning driving robotic process automation that relies on complex data models and complex orchestration to truly differentiate themselves from competition. These modern applications depend on low latency decisions, resulting from reduced layers of technology used to perform high-impact, real-time business functions. This requires a swift and unified in-memory data processing platform that provides accurate answers and decisions.
Modern ESP frameworks will also need to offer the necessary responsiveness that IoT and other mission-critical applications demand. Oftentimes in a M2M communication scenario, there is someone or something waiting for a decision and a hint to act upon it. Without the ability to tap into the intelligence provided by data as close to the real-time event as possible, this data is destined to enter the caverns of dark data.
Lastly, to maintain the veracity of decisions in relation to the data and information that drove them, the data platform will need to provide the traditional guarantees of a database. This includes the atomicity, consistency, isolation and durability (ACID) transaction guarantees required for most applications in the IoT, financial services and telecom industries. The needs go beyond simple storage-level guarantees to include ingestion and the application of rules and insights, ultimately driving decisions.
A new layer of infrastructure
The low latency requirements of IoT devices and applications can only be met with a new layer of infrastructure, such as edge data centers or micro data centers that are close to the end-user or devices they serve.
To capitalize on 5G’s influx of data, all industries will require scalable IoT data processing at the edge to process and analyze the data at a speed that retains its value and makes it actionable. In specific use cases, non-vital data will be able to be offloaded to cloud data centers. But when actionable decisions are needed, near-edge computing will provide organizations with the best chance to respond to events in real time.
Ultimately, IoT data processing at the edge requires the ability to conduct stateful, high-performance stream processing at scale on data in motion to deliver accurate insights. This approach combines data storage and stream processing to streamline the data stack to keep pace with the barrage of 5G IoT data.
Legacy database technology has traditionally been focused on analyzing historical data to gain a rear-view understanding of business performance. While this is important to the success of a business, in order to gain a competitive advantage, it’s critical to use machine learning to drive intelligent decisions on streaming data as the event is being processed.
During what is being referred to as the fourth industrial revolution, historical data about what has worked in the past won’t help organizations that refuse to embrace IoT. The key element to machine learning is finding a predictive analytics model that trains on historical data while also ingesting a high volume of constantly streaming data to operationalize it, all in real time.
For example, as cities begin to install more IoT endpoints and become more connected to the way citizens flow throughout numerous areas, IoT-driven insights may lead to more pedestrian-friendly designs or ones that improve the flow of traffic for vehicles. From an emergency services perspective, IoT endpoints could provide fire trucks, police cars and ambulances with optimal routing that produces quicker reaction times and saves lives.
It could also allow emergency services to stream real-time video of an emergency, helping responders assess situations and make more informed decisions. The use cases within cities are endless and will continue to grow as more endpoints provide a better look at the flow of people, cars and businesses.
In a 5G-connected IoT world, removing latency is key to making better informed business decisions that drive the success or failure of an operation. Those organizations and cities that don’t embrace the IoT and its billions of endpoints will fail to gain actionable insights and lag behind, but those that do will develop applications and improve people’s everyday lives in ways that were unfathomable with 4G.
Notifications have become a common ritual of modern life: a notification on your smartphone shows that app updates have become available, you tap to download them and now you have newly installed features, bug fixes and security improvements.
App stores have made this process simple for the roughly 3 billion smartphone owners worldwide, but what about the 41.6 billion machines, sensors, cameras and other connected devices that IDC forecasts will make up IoT by 2025?
Until recently, the importance of easily and efficiently delivering software and firmware updates to IoT devices has been undervalued. But as these devices proliferate in cars, factories, farms and any number of other environments, keeping them up to date to either add functionality or ensure security is a growing and important priority.
Challenges of IoT software updates
Delivering software and firmware updates to IoT devices comes with two significant challenges. First, if an enterprise has hundreds or thousands of IoT devices deployed in the field, it is impractical — if not impossible — to perform updates manually. Second, when executed remotely, pushing out software updates that can range from a few to a hundred megabytes or more can constrain bandwidth and drive up costs, especially when they’re performed by cellular uplinks.
Organizations must find answers because it will always be easier and less expensive to improve through updates rather than replace IoT devices. And because a vulnerability in just one device can threaten the entire network, it is essential to constantly provide every device with the latest security protections.
Innovations in IoT software updates
Fortunately, technology has emerged to eliminate manual updating while also addressing the bandwidth issue. And it’s being successfully applied in the field by certain manufacturers, such as Cyberdyne. The company has attracted worldwide attention for its HAL, a wearable, cyborg-type robot that helps people with spinal cord injuries and other disabilities regain movement.
Cyberdyne also makes an advanced cleaning robot called CL02. Leveraging AI features, the robots work without guide wires or magnetic tapes: they record building layouts, map out cleaning routes dynamically, and detect obstacles with built-in 3D cameras to ensure safety.
To keep the robots’ software up to date efficiently and cost effectively was a major concern for Cyberdyne. The company assessed three options: sending service engineers all around Japan to client facilities; recalling the robots to the company headquarters; or remotely updating each individual robot.
The first option would have been labor-intensive, slow and costly. The second was impractical because it would take robots — each of which is expected to have a five-year service life — out of service for an undetermined period. The third would raise bandwidth and scalability concerns.
Cyberdyne decided to use technology that seamlessly rolls out software updates to its Linux-based robots in the field, without engineer intervention and with zero downtime, allowing the company to continue expanding its robot fleet while controlling operational costs. With this technology, containerized software packages that bundle everything the robots need, applications together with all their dependencies, are automatically downloaded and installed on the robots.
In a major innovation, a form of delta compression is applied: only code that has changed is transmitted to the robots, rather than the entire software package. For Cyberdyne, that can mean the difference between 500 MB and just 20 MB per robot, an enormous savings of time and bandwidth.
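The article doesn't describe the vendor's exact mechanism, but the core idea of transmitting only changed portions can be sketched with a naive fixed-size chunking scheme: hash each chunk of the old and new images and ship only the chunks whose hashes differ. Chunk size and names here are illustrative assumptions:

```python
import hashlib

CHUNK = 4096  # bytes per chunk (illustrative)

def chunk_hashes(data: bytes):
    """Hash each fixed-size chunk so unchanged chunks can be skipped."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def delta(old: bytes, new: bytes):
    """Return (chunk_index, chunk_bytes) pairs that differ from the old image."""
    old_h = chunk_hashes(old)
    patch = []
    for i, h in enumerate(chunk_hashes(new)):
        if i >= len(old_h) or h != old_h[i]:
            patch.append((i, new[i * CHUNK:(i + 1) * CHUNK]))
    return patch

old_img = b"A" * 8192 + b"B" * 4096
new_img = b"A" * 8192 + b"C" * 4096  # only the last chunk changed
patch = delta(old_img, new_img)
print(len(patch), "of", len(new_img) // CHUNK, "chunks transmitted")  # 1 of 3
```

Production delta tools (rsync-style rolling hashes, binary diffs) are far more sophisticated, but the bandwidth arithmetic is the same: unchanged chunks cost nothing on the wire.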
Bug fixes and new features are now delivered with unprecedented speed. And Cyberdyne’s customers don’t have to worry about robot software maintenance interrupting their cleaning schedules. Cyberdyne is even able to publish updates through an app store that enables their customers to update their robots according to their own timetable and cleaning schedule.
As Cyberdyne's experience demonstrates, a mechanism for delivering software updates in the easiest, fastest and most cost-effective way possible is a critical component of a successful, long-term IoT strategy. As the number of IoT devices continues to boom, it's a topic the industry will increasingly be talking about as it looks to solve the problem of IoT software updates.
When it comes to IoT, the potential for our lives, businesses and industries to transform and improve has never been greater. From fully connected homes to hospitals and cities that collect and share data, the influence of IoT is endless.
Research once predicted that by 2020 there would be 20 billion connected IoT devices. Now, according to a 2019 International Data Corporation forecast, there will be more than 41 billion connected devices by 2025. This means the amount of data being collected, much of it potentially low-value, is becoming truly overwhelming.
Though 5G and edge computing will make it possible to transmit data through high-powered, low-latency networks, these technologies are in their infancy with few use cases. Network infrastructure challenges mean that 5G is still some way off, and if the number of connected IoT devices is going to double, 5G will struggle to deliver unless organizations can quickly begin to understand the value locked within the data being transferred.
5 tips to help achieve value from IoT
The question then becomes this: Why does all IoT data need to be sent back to data centers? If organizations want to become truly data driven and be able to have IoT data move their business forward, they must begin by having a concise business strategy in place, teamed with supporting technology.
1. Define the challenge the business is trying to solve. Just because an organization has connected devices doesn't mean it will get value from them; value lies within the data. There will be those in the company saying "we need IoT," but organizations must consider what questions the data from IoT devices needs to answer to help the business grow, improve processes and enhance the customer experience. Organizations must also ask what insight the business wants from monitoring each device, and what the outcome of the investment needs to be.
2. Less is more. With so much data at their fingertips, organizations must move away from the idea of having every device connected and the "let's store everything just in case" mindset. They need to be able to work out what is good and bad data. If all a business has coming back from devices is low-value data, it will not benefit.
3. Be savvier with analytics. Organizations need to become more strategic about how data is analyzed, not just in terms of getting insights but by looking at the data itself and working out where its inherent value lies. When data can be analyzed down at the edge, only the most valuable data collected will be shared, and in real time, making the process more cost-effective for the business. Additionally, the amount of data being collected from connected devices puts a great deal of pressure on a cloud network. If organizations want to send data to the cloud, sifting through all of it at the edge with edge analytics will ease the load.
4. Don't pin your hopes on 5G. While there is a buzz about the possibility of sending every piece of data back to the data center with 5G, bandwidth is still going to run out quickly because of the sheer volume of data being generated. If we think about what will happen after 5G, there will need to be another network advance to help solve data issues, which means organizations need to make changes now rather than wait for the next development.
5. Have the precise skill set. An organization might spend a considerable amount on IoT and other emerging technologies and be able to retrieve a lot of data, but if it doesn't know how to get value from that data, it's pretty much worthless. Due to the complex nature of any IoT deployment, there will be a requirement for specialist skills and expertise.
Working with device suppliers will help organizations to understand data formats and to work out what data is actually needed to deliver insights to meet the defined strategy. Edge computing will also have a big part to play in sending the right data to tools and technologies that can be accessed by individuals who can make a difference for the business.
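The edge-analytics tip above can be sketched in a few lines. This is a hypothetical filter, not any particular edge product: it summarizes a batch of readings locally and forwards only an aggregate plus statistical outliers, instead of shipping every raw reading to the cloud:

```python
import statistics

def edge_filter(readings, z_thresh=3.0):
    """Summarize at the edge: keep aggregates and outliers, drop the rest.
    Returns a compact dict to send upstream instead of every raw reading."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings) or 1.0  # guard against zero spread
    outliers = [r for r in readings if abs(r - mean) / stdev > z_thresh]
    return {"count": len(readings),
            "mean": round(mean, 2),
            "outliers": outliers}

# 100 temperature readings with one anomalous spike
raw = [20.1, 20.3, 19.9, 20.0, 55.0] + [20.2] * 95
summary = edge_filter(raw)
print(summary["count"], len(summary["outliers"]))  # 100 1
```

One hundred readings collapse into a handful of numbers on the wire; the anomaly that actually matters still reaches the cloud in full.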
Imagine IoT without Wi-Fi, Bluetooth or Zigbee. Anyone who wanted to connect one device to another, much less to the cloud or a central server, would need bundles of Ethernet cables. Such a cord-heavy setup would quickly become unmanageable. Similarly, what if smart speakers had to be connected via Ethernet rather than Wi-Fi? While they would still offer the same functionality, it's doubtful they would be as popular considering they would be tethered at close range to an internet outlet.
It’s safe to say that IoT never would have caught on were it not for the ability to connect wirelessly. But what if what’s true for the ethernet cable can also be true for the power cord? What if today’s mobile and IoT products went one step further and became truly wireless? Imagine never having to worry about connecting today’s devices, such as mobile phones and smart speakers, to a power cord or replacing batteries in order for our devices to stay continually charged?
We’re right on the cusp of this game-changing evolution that I call “cord cutting 2.0.” And it’s only a matter of time before wireless charging technology becomes mainstream, enabling IoT and smart devices to be even more powerful, versatile, and mobile than ever before.
A brief history of cord cutting
Cord cutting began for both symbolic and literal reasons. The initial phase came about due to the ubiquity of Wi-Fi, which launched a movement that brought devices together, improving mobility and accessibility for consumers. The growing popularity of music streaming was a primary driver.
With 76% of U.S. homes now using Wi-Fi, and the technology becoming faster and more common, that number will only continue to increase, according to a Security Sales and Integration study.
Cord cutting 1.0
Tired of exorbitant pricing and stringent contracts, consumers also began canceling cable TV service in favor of streaming TV platforms. Entertainment forecaster eMarketer expects 50 million consumers to have "cut the cord" with cable by 2021. Comcast reported a loss of 2.1% of its cable subscribers compared with the previous year, according to a Comcast earnings call conducted in October 2019.
This is firsthand evidence that consumers are ditching traditional cable in favor of streaming services. We’ll call this “cord-cutting 1.0.”
Cord cutting 2.0
Current smart devices are connected via Wi-Fi, which in today’s standards is about as wireless as they can get. But there is still one cord that remains and has yet to be cut: the power cord.
Whether it is to charge a battery by connecting directly to a power supply, or to keep a device constantly plugged into an electrical outlet, all of our devices must be plugged in at some point in order to function and maintain their charge.
This is where the next phase of the cord-cutting revolution — cord cutting 2.0 — will happen.
And the catalyst for this revolution is long-range wireless charging. Already available for integration by device manufacturers, this wireless power delivery technology will enable consumers and businesses alike to finally cut the power cord and go completely wireless.
How does cord cutting 2.0 work?
Long-range wireless charging promises to create new and exciting user experiences. Consumers will no longer have to worry about battery life and will be free to use their devices for long periods of time without having to plug their devices into an outlet to recharge or bring along a power pack.
A promising approach to long-range wireless charging relies on infrared light to deliver energy from a transmitter to a device. Infrared light provides several benefits:
- Extended range: As long as a device is in view of a wireless transmitter — even at 20 feet away — it will charge from essentially anywhere in a home or office.
- Safety: Infrared light has significant safety advantages. Power is only delivered to the device being charged and not to anything in the surrounding environment. Infrared light is abundant in sunlight and in nature. In fact, some say it is nature’s preferred way of energy delivery; living organisms are already well adjusted to it.
- Faster charge: Infrared light signals can deliver focused energy and charge devices more quickly. This means that in most instances, a consumer could use their device continuously while it’s charging and suffer no power loss.
Infrared light charging is not the only approach that could potentially deliver cord cutting 2.0. Improvements in battery technology have resulted in longer-lasting batteries and faster recharges. Small solar panels, such as the ones that used to be on calculators, can provide a small amount of energy indoors. There’s also RF charging, which can be used to transmit energy from a wireless charger to smart devices, though it carries significant power limitations and safety concerns.
What are the benefits of cord cutting 2.0?
IoT and smart devices help homes and offices discover efficiencies, lower costs and improve the overall quality of life. That is until the batteries in a smart home lock need to be changed or a home security camera needs to be charged.
Embracing cord cutting 2.0 can ensure users’ devices are charged constantly and efficiently. For businesses and homes, this creates two essential benefits:
- No batteries or cables: When consumers don’t have to worry about batteries or cables, they can place their devices anywhere for any amount of time. No more tripping over unsightly power cables or fighting over outlets for chargers.
- More flexibility and innovation: With long-range wireless charging, developers can introduce features that may have previously been scrapped due to power constraints. For instance, a wireless security camera could offer video streaming or perform power-hungry face recognition and AI functions in the camera. Additionally, users will have more flexibility in the placement of devices, making all IoT networks more mobile and dynamic.
On the verge of a new revolution
Long-range wireless charging promises to supercharge the potential of IoT networks. Just like Wi-Fi eliminated the data cord and streaming TV eliminated the cable cord, long-range wireless charging will eliminate the power cord.
Cord cutting 0.0 and 1.0 didn’t happen overnight, and neither will cord cutting 2.0. But forward-looking device manufacturers and visionary executives should start thinking about the opportunities and implications of long-range wireless charging now.
Once cord cutting 2.0 has been fully embraced in offices and homes around the world, businesses and consumers alike will be able to cut the last of their cords and go completely wireless. Offices, homes and the IoT will never be the same.
The IoT data pipeline is a simple concept. First, data is gathered from multiple sensors, then all data is transformed into a common format, followed by storing it in data lake storage. Once the data is stored in an accessible place, analysts can use it to come up with answers to business problems. Seems simple, but it is not always easy to execute. On average, the completion of a pipeline request ranges from weeks to months, resulting in loss of business and increased IT costs.
The most common pattern observed in IoT data pipelines is where enterprises use data lakes to store all their data. However, lack of data accessibility and increased complexity of these pipeline architectures rapidly turns data lakes into data swamps.
Organizations often react to data swamps by copying their data from data lakes into data warehouses using fragile and slow ETL jobs, doubling storage costs to maintain these copies of data. Then, data engineers must create cubes or BI extracts so analysts can work with it at interactive speeds. In this scenario, enterprises don’t have full control of their data. Analysts don’t know where their data is located, and there are dozens of data engineers trying to keep things running. What can we do to make things better?
Use a data lake engine to leverage the value of your IoT data
Data lake engines are software solutions or cloud services that provide analytical workloads and users with direct SQL access across a wide range of data sources — especially for data lake storage — through a unified set of APIs and data models. While it may sound somewhat hard to believe, data lake engines eliminate the need for data warehouses on top of your data lake storage because the data lake engine allows users’ BI and data science tools to talk directly to the data stored in the data lake. Data engineers and architects can leverage data lake engines to present BI analysts and data scientists with a self-service semantic layer that they can use to find datasets, write their own queries against them, and use them in their end user applications. This process is a whole lot simpler and faster since enterprises can eliminate complex ETL code as well as slow and costly data replication.
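The "self-service semantic layer" idea can be illustrated in miniature with standard SQL. The snippet below is only a stand-in sketch (using Python's built-in sqlite3, not a real data lake engine, and invented table names): raw source tables stay where they are, and a curated view gives analysts friendly names to query directly, with no ETL copy of the data:

```python
import sqlite3

# Stand-in for raw data lake sources and a curated semantic-layer view.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE raw_readings (device_id TEXT, ts INTEGER, temp_c REAL);
CREATE TABLE devices (device_id TEXT, site TEXT);
INSERT INTO raw_readings VALUES ('d1', 1, 21.5), ('d1', 2, 22.5), ('d2', 1, 30.0);
INSERT INTO devices VALUES ('d1', 'plant-a'), ('d2', 'plant-b');

-- The "semantic layer": analysts query v_site_temps, never the raw tables.
CREATE VIEW v_site_temps AS
  SELECT d.site, AVG(r.temp_c) AS avg_temp
  FROM raw_readings r JOIN devices d USING (device_id)
  GROUP BY d.site;
""")

rows = con.execute(
    "SELECT site, avg_temp FROM v_site_temps ORDER BY site").fetchall()
print(rows)  # [('plant-a', 22.0), ('plant-b', 30.0)]
```

A real data lake engine applies the same principle at scale, pushing such queries down to files in object storage rather than copying them into a warehouse first.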
Key elements for simplifying your IoT pipeline
The basic pipeline scenario works well in small IoT environments where only hundreds of rows of measurements are captured daily, but in large industrial environments, where hundreds of gigabytes of data are generated every hour, the story is different. In such large-scale scenarios, we need to worry about how many copies of data there are, who has access to them, what managed services are required to maintain them and so on. Additionally, the traditional methods of accessing, curating, securing and accelerating data break down at scale.
Fortunately, there is an alternative: a governed, self-service environment where users can find and interact directly with their data, thus speeding time to insights and raising overall analytical productivity. Data lake engines provide ways for users to use standard SQL queries to search the data they want to work with without having to wait for IT to point them in the right direction. Additionally, data lake engines provide security measures that allow enterprises to have full control over what is happening to the data and who can access it, thus increasing trust in their data.
IoT data pipelines have to be fast. Otherwise, by the time the data is delivered to the analyst, it runs the risk of being considered obsolete. Data obsolescence is not the only issue; increased computing costs and the prospect of data scientists leaving because they don’t have data to analyze are also potential issues. Data lake engines provide efficient ways to accelerate queries, therefore reducing time to insights.
IoT data comes in many different shapes and formats. Because of this, the efficiency of a data pipeline is decreased due to the number of transformations that data has to go through before being put into a uniform format that can be analyzed. Take data curation, blending or enrichment as an example. This is a complicated process that entails using ETL code to extract fields from a source, then through other database connections, extract the fields that we want to blend from another source. Finally, the resulting dataset is copied into a third repository. Data lake engines alleviate the complexity of this scenario by allowing users to take advantage of a self-service environment, where they can integrate and curate multiple data sources without the need to move data from its original data lake storage source.
IoT data pipelines have been around for a while; as they age, they tend to be rigid, slow, and hard to maintain. Making use of them normally entails significant amounts of money and time. Simplifying these pipelines can drastically improve the productivity of data engineers and data analysts, making it easier for them to focus on gaining value from IoT data. By leveraging data lake engines, enterprises can embark on a more productive path where IoT data pipelines are reliable and fast, and IoT data is always accessible to analysts and data scientists. Data lake engines make it possible for data-driven decisions to be made on the spot, increasing business value, enhancing operations, and improving the quality of the products and services delivered by the enterprise.
Botnets continue to plague IoT devices, resulting in a range of criminal activity from denial of service attacks and dropping malicious payloads such as ransomware to hijacking unused IoT device CPU cycles for things like crypto mining. One of the most interesting aspects of botnets is their longevity and persistence.
Botnets persist quarter after quarter
According to Fortinet’s most recent threat landscape report, today’s top botnets tend to carry over with little change from quarter to quarter and from region to region, more so than any other type of threat. For example, Mirai, active since 2016, still sits among the five most prevalent botnets identified in Q3 2019. That provides an interesting window into modern cybercrime, especially given the damage caused when Mirai was first released.
First, it suggests that the underlying control infrastructure is more permanent than any particular tools or capabilities. This is due, in part, to the fact that the traffic to and from IoT devices in many organizations is not being identified or tracked. As a result, communications between compromised IoT devices and their criminal control systems tend to continue uninterrupted. As the saying goes, at least as far as these cybercriminals are concerned: “if it ain’t broke, don’t fix it.”
One of the reasons botnets remain a common issue is that the OSes of many IoT devices cannot be patched or updated. This means that if a connected IoT device is vulnerable, it is at risk of being exploited. Because IoT communications traffic is not being tracked, too many organizations have little to no idea that the IoT devices attached to their networks pose a risk.
Perhaps most importantly, the prevalence of botnets indicates that far too many organizations either do not understand the risk that compromised IoT devices represent or simply feel that there is little they can do to protect themselves. Of course, even if deployed IoT devices can’t be patched or upgraded, there are plenty of things organizations can do to reduce the risk that such devices introduce. This begins by adopting a strategy that some cybersecurity professionals refer to as zero trust network access.
Steps to secure connected IoT resources
The basic idea is to assume two things. The first is that every device on your network, including your IoT devices, may have already been compromised. The second is to assume that users cannot necessarily be trusted and can be spoofed. As a result, the ability to see and communicate with connected devices needs to be explicitly authorized and strictly controlled. Achieving this zero trust network access includes the following elements:
Multi-factor authentication (MFA): Users need to validate themselves to the network using MFA before they can access, deploy, manage or configure any device anywhere on the network.
Network access control: Any device seeking access to networked resources — whether inside or outside the network — needs to go through a network access control system. This ensures that devices are identified and authenticated based on several criteria and then dynamically assigned to predetermined segments of the network.
Intent-based segmentation: Dividing the network into functional segments is essential to manage today’s expanding networked environments and to limit the damage caused by a compromised device or rogue user. By interfacing with a next-generation firewall, segments can be dynamically created based on the business objectives of devices seeking access to networked resources.
Inventory management: One of the Achilles’ heels of an IoT-based infrastructure is that many organizations have lost visibility into what devices are connected to their network, where they are located, or what other devices they can communicate with. Inventory management is essential in keeping track of your IoT devices and can be connected to your network access control system and segmentation solutions to know what devices are actively connected to your network and where in your network they have been deployed.
Threat intelligence: IT teams need to be able to map ongoing threat information about active compromises and vulnerable systems to their existing IoT inventory. This mapping enables network administrators to prioritize patching for devices that support it and to strengthen proximity controls and segmentation rules for devices that can’t be updated.
Behavioral analytics: Finally, a system needs to be put in place that can baseline normal IoT device behavior and then alert on anything out of the ordinary. For example, digital cameras broadcast specific types of data to specific destinations. But they should rarely if ever request data, and they should never transmit any data to other devices or destinations. And if they do, your network should immediately recognize such unauthorized behavior, quarantine the device and provide an alert to a systems administrator.
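A minimal sketch of this baseline-and-alert idea, assuming outbound bytes per interval as the tracked behavior; the figures and the three-sigma threshold are illustrative, not drawn from any particular product:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Baseline a device's normal outbound traffic (bytes per interval)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# A camera's historical outbound volume per 5-minute window (hypothetical).
history = [510, 498, 505, 490, 515, 502, 508, 495]
baseline = build_baseline(history)

assert not is_anomalous(507, baseline)   # within normal bounds
assert is_anomalous(250_000, baseline)   # camera suddenly exfiltrating data
```

A production system would baseline many behaviors per device class (destinations, protocols, timing), but the quarantine-and-alert decision reduces to the same comparison.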
A security-first networking strategy is the best place to start
IoT devices have become an essential component of any organization looking to succeed in today’s digital marketplace. However, malicious actors continue to aggressively target these devices because they tend to be easy to exploit, and once they have been compromised they tend to remain compromised. Organizations that increasingly deploy and rely on IoT devices — especially as they begin to develop complex, integrated systems such as smart buildings — need an effective strategy in place to see, monitor, control and alert on every connected device in their digital infrastructure. That begins with an integrated, systemic approach that ties critical security and networking systems together into a single, security-driven networking strategy that can enable and ensure zero trust network access.
We live in an increasingly digital society with an always-on mentality. Cities and governments have a social responsibility to bridge the growing digital divide and ensure internet access for all their citizens. The private sector focuses on delivering a return for investors, while citizens expect improvements in their quality of life.
To achieve these goals, we have to come up with a new way of doing business. We need ‘Uber-like’ innovation to completely rethink and disrupt the current status quo and the business-as-usual methodology. Let’s look at four examples of how IoT can drive innovations in our society when put to the task:
Government. Many cities are rolling out digital governmental services. For example, Dubai enables citizens to attend traffic court using a video call on a smartphone. This makes the process more efficient because people don’t have to wait for their turn in the courtroom and it also eliminates the journey to the court building, which can reduce traffic congestion.
Healthcare. The convenience of online doctor consultations, sometimes referred to as telemedicine, not only extends medical care to more people around the globe in a faster, more efficient way, but also better utilizes the resources of healthcare specialists. Personal IoT devices, such as smart watches, can now track health statistics 24/7, alert users to abnormalities and signal early warning signs to users and healthcare providers. There are several documented cases of smart watches notifying users of a medical condition and thereby saving the person’s life.
Education. The digital education experience allows students to tailor their classroom setting and content delivery to best suit their needs, and gives them instant access to the best resources on a global scale. Without having to travel, students can learn from the best minds around the world, from grade school all the way through higher education. Of all students in the U.S. taking at least one online class, 48% were taking only online classes, according to a study conducted by Allen and Seaman.
Hospitality. Hotels are rolling out an IoT-based, automated digital experience for customers to get room access, control lighting and HVAC, and access entertainment through their smartphones. This not only improves the user experience, but also the retention rate and customer loyalty.
The “prevailing drivers of smart hospitality building deployments appear to revolve around making the experience within hospitality buildings more convenient for guests and improving the operational efficiency of the hospitality building with respect to those using the building either as a guest or third-party business,” according to a recent iGR report.
How do we get there?
IoT depends on reliable, high-speed communications infrastructure. If we have reached the point where customers expect ubiquitous broadband services, we need to leverage all technologies to deliver a seamless experience. This means converging wireless and wireline networks and ensuring that these networks are available wherever needed.
For example, the next-generation 5G millimeter wave network will require a tremendous amount of backhaul fiber, and we cannot build this network the same way we deployed the 3G, 4G or LTE networks of the past. Due to densification requirements predicted at roughly 100 times those of 3G networks, we need to rethink the way we will deploy these networks. Shared infrastructure, such as smart poles, digital street furniture and existing utility and cable company access, all need to be part of the discussion.
As we start preparing for the electric vehicle evolution, we need to use it as an opportunity to be smart about preparing our streets for the next generation of IoT applications. Most of the world’s leading car manufacturers have already started rolling out plans to convert their automobile offerings from internal combustion to battery-powered vehicles.
The EV charging infrastructure required to put these vehicles on our roads will be of epic proportions, and a lot of new construction will offer us the opportunity to leverage this work and prepare for a connected future. Think of the city street where you park your car every day: What upgrades need to take place to build charging stations or even future inductive charging infrastructure there?
When digging up the streets, we can add extra capacity by adding conduit and fiber to the shared infrastructure and incorporate these charging stations into smart poles or digital kiosks.
Another requirement will be incorporating AI technology into new applications. AI can use data from existing or multiple new sensors and combine these inputs to derive more insightful outcomes.
For example, some companies use the public Wi-Fi network in metro stations to gather crowd heat maps to better understand commuters’ habits and improve train schedules, without the privacy concerns associated with video analytics.
Another example of shared infrastructure for IoT is vapor detection sensors inside Wi-Fi access points in school bathrooms to address the vaping epidemic without infringing on privacy rights.
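The Wi-Fi crowd heat mapping idea can be sketched in a few lines. The zones and hashed device identifiers below are invented; a real deployment would read association logs from the access points:

```python
from collections import Counter

# Hypothetical Wi-Fi association logs: (station_zone, hashed_device_id).
# Hashing the identifier avoids storing anything personally identifiable.
observations = [
    ("platform-a", "d41d"), ("platform-a", "9e10"), ("platform-a", "f3a2"),
    ("platform-b", "9e10"), ("platform-b", "77c1"),
]

# Count distinct devices per zone to build a simple crowd heat map.
heatmap = Counter()
for zone, device in set(observations):   # dedupe repeat sightings
    heatmap[zone] += 1

print(heatmap.most_common())  # busiest zones first
```

Fed into a scheduling system, counts like these are the "more insightful outcomes" AI can derive from sensors that were deployed for another purpose.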
From the above examples, it’s clear that our lives are becoming more digitally connected, and this trend will accelerate as we ramp up hyperconnectivity technologies, such as 5G, IoT and private networks. Connectivity is the key foundation that will enable all these efforts, and society needs to start viewing connectivity through the same lens as our other utilities, such as water, electricity and gas.
Little by little, we are building an always-on future and layering on applications that improve people’s lives, but we need to remember that it starts with putting infrastructure in the right places.
Although we haven’t yet seen an ‘Uber-like’ innovation in infrastructure, don’t be surprised if the new demands from IoT applications bring disruption and innovation.
Today, 55% of the world’s population lives in urban areas, a share expected to rise to nearly 70% by 2050, according to the UN Department of Economic and Social Affairs. If smart cities are required both now and in the future, then IoT technology is the pathway to a connected world of computers and devices that share data with each other to increase efficiency.
IoT technology comes with a set of challenges and obstacles to overcome, but it can create connectivity across disparate assets to maximize efficiencies in ways never previously possible, ultimately changing the way local governments conduct business, handle everyday life and crises, and budget their time and money.
With the right planning and IoT implementation, the varying levels of smart cities can prove successful for municipalities across the globe.
Opportunities of smart cities
What makes a true smart city or community? Having a network of connected devices in and around a city to keep tabs on what’s happening might sound invasive, but the benefits are far-reaching. From regulating the flow of traffic to pinpointing exactly where to send a repair crew to fix a pothole, smart cities are just that: smart.
For example, let’s say a fire breaks out in an office building. Sensors connected to the network will send immediate alerts to a central command center. Those same sensors could even determine the speed at which the fire is growing, how many people are trapped and whether the building is safe for firefighters to enter. This information, coupled with building information modeling, can provide the first responders with the critical structural details, such as the location of water valves, gas lines and air ducts, to effectively manage the situation and minimize damage as much as possible.
Furthermore, the smart grid would help allocate the proper resources and reduce the risk to everyone involved. For example, geo-location tools can change traffic lights along the route to control the flow of traffic, enabling first responders to reach an incident location as quickly as possible.
IoT could also create a data history of the number of cars on the road at any given time and how many passengers are taking specific trains at various times throughout the day. Knowing this, the system could make predictions with amazing accuracy that could then trigger other smart systems, such as traffic lights and train switches, to work in sync to keep everyone moving and the city running.
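A toy version of that prediction-and-trigger loop might look like the following; the ridership figures, moving-average window and capacity threshold are all assumptions for illustration:

```python
def predict_next(history, window=3):
    """Predict the next interval's passenger count with a moving average."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def schedule_action(prediction, capacity=400):
    """Trigger downstream systems when predicted load exceeds capacity."""
    return "add_train" if prediction > capacity else "normal_service"

# Passengers per train over recent intervals (hypothetical data history).
ridership = [320, 360, 410, 480, 530]
forecast = predict_next(ridership)
action = schedule_action(forecast)
print(forecast, action)
```

Real systems would use far richer models, but the shape is the same: historical IoT data feeds a forecast, and the forecast drives other smart systems automatically.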
IoT “provides the senses and nervous system, along with some of the muscles needed to really deliver on the smart city promise,” said David Mudd, global digital and connected product certification director at BSI.
“The ability to know exactly what is going on in real time across all aspects of the city’s infrastructure and how it is being used, and act on this information with minimal human interaction, has amazing potential to improve the quality and efficiency of services,” said Mudd.
Challenges to overcome
Like any new technology, the use of IoT requires overcoming some challenges. However, there are tools and resources, such as ISO’s international standards for sustainable cities and communities, that can help guide the development and implementation of smart cities.
Not surprisingly, at or near the top of the list of smart city concerns is data security. How safe is the data that’s being collected? Where is it stored? Is it encrypted? One worry is that criminals could worm their way into the protected network through any IoT device, such as a smart TV, thermostat or even a light bulb. IoT devices and their data must be protected, and information security must be managed.
Another challenge is that a city could spend millions of dollars on a system that either doesn’t work as designed or doesn’t work at all. Imagine the catastrophic frustration that could occur in an urban environment if there was a denial of service on the traffic light system during rush hour. The system must work for protection to occur or for efficiencies to be realized.
“If you can’t trust the product to work as it should or trust the data it produces, then connectivity is an expensive, and possibly dangerous, waste,” said Mudd.
It is also important to consider that getting all the connected devices to work in harmony with each other will likely take some time. The sheer volume of data sets and devices, all of which require varying times to update and process information, will lead to hiccups. There could also be additional costs to remedy any related problems, so city governments need to be prepared and budget properly.
Finally, a detailed contingency plan must be in place for dealing with various incidents and scenarios, along with clear directions around who owns and manages the data being collected.
Solutions to common smart city challenges
A city connected through a network of devices, both wired and wireless, offers wide-ranging benefits. And despite the challenges that will arise, ISO has a framework to overcome them. Recognizing that no two smart cities are the same, ISO 37106 offers a citizen-centric approach that puts the needs of the community first and gives all stakeholders a guide to help operationalize the vision, strategy and policies needed to become a smart city.
In addition, municipalities should do their homework and research the companies that offer smart solutions. During the implementation process, it is critical to meet with stakeholders in the project, including citizens all the way up to local, state and perhaps even federal leaders.
An important part of the process is listening to what everyone’s concerns might be, creating a list of action items and coming up with solutions. How transparent will you be with the data? How will law enforcement use it? Will the data be stored indefinitely, and how safe will it be? These questions will need answers.
Municipalities must “independently verify that a product or system will work as it should, safely and securely throughout its intended life,” said Mudd. Municipalities must also provide advice and training to stakeholders regarding best practices.
The cities of tomorrow have the potential to efficiently maintain themselves with minimal human input and take a lot of risk out of the equation thanks to IoT. There’s no better time than now to start the planning process.
Connected RPA provides a collaborative platform for humans and automated Digital Workers to deliver business-led, enterprise-wide transformation. Although connected RPA is a business-led technology, it can end up out of control if it’s treated solely as a business project. However, it’s not a pure technology project either: if it’s treated as an IT project, it will end up with too much governance and too much control, which prevents it from getting done well, or at all. The key is to find a middle ground between delivering business-led and IT-endorsed projects.
To ensure that successful outcomes are achieved with any connected RPA program, we created an industry-standard delivery methodology that’s broken down into seven pillars: vision, organization, governance and pipeline, delivery methodology, service model, people and technology. The general principles involve putting a structure behind how processes are identified, built, developed and automated. In this article, I’ll discuss what each pillar encompasses and the considerations required when embarking on a connected RPA journey.
Vision
It’s important to first establish the reasons why a connected RPA program is being undertaken and align these reasons to corporate objectives. For example, Teleco’s drive for connected RPA included improving customer satisfaction, operational efficiency, process quality and employee empowerment. Naturally, key stakeholders need to be engaged to gain their backing. If they see connected RPA as a strategic business project, they’ll champion it and help provide the requisite financial and human resources.
Although connected RPA is managed by a business team, it’s still governed by the IT department using existing practices. Therefore, IT must be involved from the start because they can support connected RPA on many critical fronts, such as compliance with IT security, auditability, the required infrastructure and its configuration, scalability and prevention of shadow IT.
Organization
This next stage involves planning where connected RPA sits within the business so it can scale effectively with demand. A centralized approach encompasses the entire organization, so it may be beneficial to embed this capability in a connected RPA center of excellence (CoE).
Another approach is a federated setup, where the connected RPA capability sits within a particular function but is scaled across the business, with the central automation team controlling standards and best practices. This is achieved by creating delivery pods throughout the organization that are responsible for identifying and delivering their own automated processes, governed by the CoE.
However, a divisional approach must be avoided. This is where multiple RPA functions run separately across the organization with differing infrastructure, governance and delivery teams. This siloed setup is not cost-effective and will prevent true scale from ever being achieved.
Governance and pipeline
The next stage is identifying process automation opportunities that will generate the fastest benefits. It’s important to be clear about what makes a truly good process. For example, selection criteria could include targeting those standard processes with a high workload or several manual and repetitive tasks, processes that have quality issues related to human errors or those that require customer experience improvements.
Even if the ideal process for automation has been found, businesses should collaborate with IT to ensure that there isn’t any maintenance planned for the target application. To guarantee the traceability of the automation program, a set of indicators should also be defined, such as financial, process, quality and performance related KPIs, prior to any activities taking place.
Other considerations include how to generate demand for connected RPA within the business. This could involve providing employee incentives for identifying suitable processes, internal communications and running workshops for engagement.
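One way to make the selection criteria above operational is a simple weighted score for ranking the pipeline; the weights and fields below are illustrative assumptions, not an industry standard:

```python
def score_candidate(process):
    """Rank a process for automation: high workload, repetitive manual
    steps and quality issues all raise the score (weights are invented)."""
    return (3 * process["weekly_volume_hours"] / 40   # high workload
            + 2 * process["manual_steps"]             # repetitive tasks
            + 5 * process["error_rate_pct"])          # human-error quality issues

candidates = [
    {"name": "invoice entry", "weekly_volume_hours": 120,
     "manual_steps": 14, "error_rate_pct": 4},
    {"name": "ad-hoc reporting", "weekly_volume_hours": 6,
     "manual_steps": 3, "error_rate_pct": 1},
]
ranked = sorted(candidates, key=score_candidate, reverse=True)
print([c["name"] for c in ranked])  # highest-value candidates first
```

Scores like these also give the program the traceable indicators mentioned above, since the same fields double as baseline KPIs.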
Delivery methodology
Once these processes have been selected, they need to be delivered as automated solutions. This means capturing the correct information in the define phase to avoid later problems, which requires knowledgeable subject matter experts to be involved.
It’s also worth holding a process walk-through for the right audience. Each chosen automated process must be documented, and an understanding gained of how it will differ from the same process performed by humans. Once this has all been agreed with the business, and the process design authority has approved the proposed blueprint and conducted the necessary peer reviews, development can begin. Once the business is satisfied, sign-off testing can start.
Service model
Once processes are in production, they need the right support around them. Businesses must ensure that Digital Workers are handing back business referrals or exceptions to the operational team for manual intervention, and that a technical capability is readily available in case the Digital Workers don’t act as expected. Ultimately, to ensure smooth continuity and availability of automation resources, there must be a robust IT infrastructure.
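The referral pattern described here can be sketched as a small wrapper that hands failed work items back to an operations queue for manual intervention; `process_claim` and its validation rule are hypothetical:

```python
def run_work_item(item, automate):
    """Attempt automation; refer exceptions back for manual handling."""
    try:
        return {"item": item, "status": "completed", "result": automate(item)}
    except Exception as exc:
        # Referral: the Digital Worker flags the case for a human operator.
        return {"item": item, "status": "referred", "reason": str(exc)}

def process_claim(item):
    """Hypothetical automated step with a business-rule guard."""
    if item.get("amount") is None:
        raise ValueError("missing amount - needs human review")
    return item["amount"] * 1.2

work_queue = [{"id": 1, "amount": 100}, {"id": 2, "amount": None}]
results = [run_work_item(i, process_claim) for i in work_queue]
statuses = [r["status"] for r in results]
print(statuses)  # ['completed', 'referred']
```

The operational team works only the referred items, while the monitoring capability watches for Digital Workers that stop behaving as expected.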
People
Appointing a high-quality lead developer is essential, but people with these skills can be difficult to find in the current market. Developers need to be trained to the highest standard in both connected RPA development and process analysis. This will enable them to perform a hybrid role while receiving on-site support from an experienced consultant. As the development team grows, businesses should appoint a design authority and a control room monitor, to ensure automation standards are maintained and to manage Digital Workers in production.
Technology
There are several technical approaches to examine when deploying a connected RPA platform. For example, considerations may include whether to use a virtual machine setup, how to manage license provisioning for future scaling, and whether to host the platform in the cloud or on premises.
It’s clear that to gain the best results with connected RPA, the complete journey must be defined upfront rather than waiting for mistakes and then correcting them. Once company-wide support is gained and a vision of desired results created, it’s best to start small. Getting this right enables the program to grow and scale organically rather than stagnating.