To begin, let’s take a look at the definitions of IT and OT separately. Information Technology (IT) refers to the “entire spectrum of technologies for information processing, including software, hardware, communications technologies and related services. In general, IT does not include embedded technologies that do not generate data for enterprise use.”
Operational Technology (OT), a relatively newer term, is explained by Gartner as the “hardware and software that detects or causes a change through the direct monitoring and/or control of physical devices, processes and events in the enterprise.”
Traditionally, IT and OT have had separate roles within an organization. However, this is changing with the internet of things and particularly with the industrial internet of things (IIoT). IIoT is a network of complex physical machinery with embedded sensors and software, blurring the lines between the IT and OT realms.
One of the main reasons these industrial systems and appliances are being brought online is to deliver smart analytics, using data generated from the machines to modify and optimize the manufacturing process. Generating data for enterprise use? That's starting to sound more like traditional IT territory. Another major use case is predictive maintenance, which is outlined in greater detail below.
OT is mission-critical IT
As the industrial internet grows beyond historically closed systems, and at an unprecedented rate, it brings an even greater interdependence between the two teams and a myriad of new security concerns. A key application connecting the two is predictive maintenance, essentially the first major application of IoT in OT: instrumenting machinery with IP-enabled sensors to monitor anomalies or changes in behavior in order to preempt mechanical failures, reduce downtime and ultimately save operational costs. This requires many things to talk to each other: an IoT device to a gateway, to an edge device, to asset management software, to an ERP system and so on. A data scientist sitting in a research facility halfway across the world can now detect that the bearing on a fan is vibrating beyond its normal range and send in a crew to fix it, a diagnosis that previously required an engineer on the plant floor, if it could be made at all. Or consider space agencies preparing for a rocket launch, where all the teams must be tightly integrated.
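To make the bearing-vibration example concrete, a predictive-maintenance pipeline might start with something as simple as flagging deviations from a recent baseline. The sensor values and z-score threshold below are illustrative, not from any particular product:

```python
import statistics

def vibration_anomaly(readings, window, z_threshold=3.0):
    """Flag readings that deviate sharply from a baseline window.

    `readings` is a list of vibration amplitudes from an IP-enabled
    sensor; the first `window` samples form the baseline. Returns
    the indices of anomalous samples after the baseline.
    """
    baseline = readings[:window]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    anomalies = []
    for i, value in enumerate(readings[window:], start=window):
        if stdev > 0 and abs(value - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies

# Steady bearing vibration, then a spike that would open a maintenance ticket.
samples = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 4.2]
print(vibration_anomaly(samples, window=8))  # → [8]
```

Real deployments would use more robust models (spectral analysis, learned baselines), but the flow — sensor, threshold, dispatch — is the same.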
Security concerns on both sides
The expanded network that the industrial internet of things is creating obviously has a vast number of benefits, particularly for smart analytics and control, but unfortunately it also opens up connected devices and systems to significant vulnerabilities and increased risk of devastating cyberattacks. Both IT and OT have always treated security as a priority, but these networked systems are presenting never-before-seen scenarios and risk profiles for both sides.
Key concerns for IT
Greater scope of impact: There's no downplaying the obvious detrimental results of a security incident in a more traditional enterprise environment, but the effects of an incident on an industrial system are on a completely different scale. Consider the repercussions if an electricity grid went offline, or if a car's engine control system was hacked and the driver was no longer in complete control.
Physical risks and safety: Unlike more traditional enterprise systems, networked industrial systems bring an element of physical risk to the table that IT teams have not had to think about. An interruption in service or machine malfunction can result in injury to plant floor employees or the production of faulty goods, which could potentially harm end users.
Outdated or custom systems: IT is used to frequent and consistent software patches and upgrades, but the industrial environments tend to be more systemic, where one small change can trigger a domino effect. As a result, many legacy plant control systems may be running outdated operating systems that cannot easily be swapped out or a custom configuration that isn’t compatible with IT’s standard security packages.
Key concerns for OT
Physical risks and safety: Threats to physical safety are not a new concern to OT teams; they’ve been implementing safety measures into industrial systems for decades. However, they’re now facing threats that are potentially outside of their control. Taking machines and control systems out of a closed system brings the threat of hacked machines, which could potentially injure employees (e.g., overheating, emergency shut-offs overridden, etc.).
Productivity and quality control: Losing control of the manufacturing process or any related devices is any OT team’s worst nightmare. Consider a scenario where a malicious party is able to shut down a plant, halting production entirely, or reprogram an assembly process to skip a few steps, resulting in a faulty product that could potentially injure end users down the road.
Data leaks: While data breaches have long been a top concern for traditional IT teams, they are somewhat new territory to OT teams that are used to working with closed systems. Given the nature of the types of industrial systems that are coming online, such as utilities, aviation and automobile manufacturing, ensuring the privacy of transmitted data is critical.
Working with IT: One of the more unexpected concerns I hear from OT teams is around how to work with IT to solve the security threats discussed above, when IT teams generally have little experience with industrial systems and their traditional security solutions typically aren’t compatible with legacy control systems. While many on the OT side see the benefits of moving away from closed systems and increasing connectivity, the perceived lack of IT experience and potential solutions for their security concerns is causing some resistance.
OT and IT collaboration: What does it look like?
While OT and IT may have different backgrounds framing their concerns about the transformation brought about by the industrial internet of things, the main underlying concern for both parties is retaining control of systems and machines and ultimately the safety of their employees and customers.
To make both sides happy, key components of any potential security solutions should include:
- Identifying and authenticating all devices and machines within the system, both within manufacturing plants and in the field, to ensure only approved devices and systems are communicating with each other. This would mitigate the risk of a hacker inserting a rogue, untrusted device into the network and taking control of any systems or machines.
- Encrypting all communications between these devices to ensure privacy of the data being transmitted.
- Ensuring the integrity of the data generated from these systems. As mentioned earlier, smart analytics are a major driver in the adoption of the industrial internet, but those analytics are worthless if the data is inaccurate.
- Assuming the manufactured goods contain software or firmware themselves, enabling the ability to perform remote upgrades down the road and ensuring the integrity of those updates.
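As a rough sketch of the first two components, device authentication and message integrity (though not the encryption itself, which would typically ride on TLS), here is a pre-shared-key HMAC scheme. The device ID, key registry and payload are hypothetical, and real systems would favor per-device certificates or a hardware root of trust over shared secrets:

```python
import hashlib
import hmac
import json

# Hypothetical registry of per-device secrets provisioned at manufacture.
DEVICE_KEYS = {"press-07": b"demo-secret-for-press-07"}

def sign_message(device_id, payload):
    """Serialize a telemetry payload and attach an HMAC-SHA256 tag."""
    body = json.dumps({"device": device_id, "payload": payload},
                      sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEYS[device_id], body, hashlib.sha256).hexdigest()
    return body, tag

def verify_message(body, tag):
    """Accept a message only from a known device and only if untampered."""
    device_id = json.loads(body)["device"]
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False  # unknown (potentially rogue) device: reject outright
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

body, tag = sign_message("press-07", {"temp_c": 71.4})
print(verify_message(body, tag))                            # → True
print(verify_message(body.replace(b"71.4", b"20.0"), tag))  # → False
```

The rejection of unknown device IDs is the rogue-device mitigation from the first bullet; the tag comparison covers the data-integrity requirement.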
It is very likely that things will continue on the path they are on today.
If things continue as they are today, it's likely we will see the separation between OT and IT continue to fade until they are potentially one and the same. In an industrial setting, efficiency is the easiest innovation one can make, but we have reached the point of diminishing returns. Further efficiencies and better processes can only come from the data captured on the OT side and analyzed on the IT side. IT and OT will eventually become buzzwords, replaced by a simple "T" for technology in any and all forms. In the meantime, it's essential that both sides consider the other's expertise and point of view and work together toward the ultimate goal: a secure, productive industrial internet of things.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
The International Telecommunication Union (ITU) envisions three usage scenarios for 5G mobile broadband communications: enhanced mobile broadband; ultra-reliable and low-latency communications; and massive machine-type communications. The Cellular Internet of Things (C-IoT) may cross over all of these usage scenarios, with the majority of connections falling into massive machine-type communications.
The case of massive machine-type communications is characterized by a very large number of connected devices typically transmitting a relatively low volume of non-delay-sensitive data. Devices are required to be low cost and have a very long battery life, such as five years or longer.
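The battery-life requirement comes down to simple arithmetic on average current draw. The cell capacity and draw below are illustrative assumptions, not figures from any standard:

```python
def battery_life_years(capacity_mah, avg_current_ma):
    """Rough battery-life estimate: capacity divided by average draw.

    Ignores self-discharge and temperature effects, so this is an
    upper bound rather than a field prediction.
    """
    hours = capacity_mah / avg_current_ma
    return hours / (24 * 365)

# A hypothetical 5,000 mAh cell with a 0.1 mA (100 µA) average draw:
print(round(battery_life_years(5000, 0.1), 1))  # → 5.7
```

The arithmetic shows why duty cycling matters: a device that mostly sleeps, waking briefly to transmit, is the only way to hit multi-year lifetimes on a small cell.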
Additional use cases are expected to emerge that are currently not foreseen. For future IMT, flexibility will be necessary to adapt to new use cases that come with a wide range of requirements.
ITU continues to work closely with administrations, network operators, equipment manufacturers, and national and regional standardization organizations to include today’s 5G research and development activities in the IMT-2020 global standard for mobile broadband communications.
Although the needs for these usage scenarios are quite varied, one commonality they share is the requirement for standardization. Driving the standardization of C-IoT is the Third Generation Partnership Project (3GPP), which has been working on standards for machine-type communications with a particular focus in Releases 13 through 15. 3GPP plays an important role in IMT-2020 and has had a program underway for developing these new 5G technology solutions for IMT-2020 since early 2015 (see Figure 2), beginning with its initial 5G RAN workshop in September 2015.
Work in 3GPP on 5G standardization is built on the foundation of LTE through LTE-Advanced Pro, whose standardization is running in parallel with the development of the 5G standards.
Cellular IoT (C-IoT) and Narrowband IoT (NB-IoT) are the so-called "clean slate" approaches adopted by 3GPP in Release 13 that will compete in the low-power wide-area arena to provide connectivity for the exponentially growing market of internet of things services. The requirements of Technical Report 45.820 are:
- Reduced UE complexity
- Improved power efficiency
- Improved indoor coverage
- Support of massive number of low throughput devices
With these requirements in mind, the idea was to design a communication procedure that could coexist with current 3GPP systems deployed in the same frequency bands, yet was not backward compatible: a "clean slate." The aim has been to enable the introduction of the new IoT technologies with, as far as possible, only a "software upgrade" to the current 3GPP radio access network nodes. C-IoT builds on GSM/GPRS technology (currently the leading M2M technology in many countries due to its low price and good coverage), while NB-IoT builds on LTE technology.
Therefore, C-IoT was designed to share radio resources with existing GSM/GPRS systems using the 200 kHz channelization and to a large extent re-using its design principles.
C-IoT reduces complexity by adopting half-duplex 200 kHz connectivity, enabling a single-chip solution that can easily integrate the single RF chain needed. C-IoT also achieves up to a 20 dB increase in maximum coupling loss (MCL) compared with GSM, using blind repetitions and modifications to the control channels.
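A simplified way to see where repetition-based coverage gains come from: ideal combining of n blind repetitions yields about 10·log10(n) dB. Under this idealized model, +20 dB would take on the order of 100 repetitions, which is why practical designs pair a smaller repetition count with other link modifications. A quick sketch:

```python
import math

def repetition_gain_db(n):
    """Ideal combining gain from n blind repetitions: 10 * log10(n) dB.

    Real gains fall short of this because repetitions are not
    combined perfectly; treat it as an upper bound.
    """
    return 10 * math.log10(n)

for n in (2, 10, 100):
    print(n, round(repetition_gain_db(n), 1))  # 2→3.0, 10→10.0, 100→20.0
```

Each doubling of repetitions buys roughly 3 dB, but it also doubles airtime and energy per message, the core trade-off in deep-coverage IoT design.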
NB-IoT, on the other hand, has been designed to be integrated into LTE eNodeBs, with three types of deployments envisaged:
- In-band, integrated as part of the resource blocks regularly used for the eNB communication
- Guard band, using 180 kHz of the unused frequency band between the last physical resource block (PRB) used and the channelization edge
- Standalone system in any assigned band, potentially re-farmed channels from previous GSM/GPRS systems owned by the operator
To achieve the aggressive requirements for IoT services, a major modification of the LTE physical layer is needed, as the initial LTE design targeted mobile broadband. NB-IoT operates in one single PRB (i.e., 180 kHz) in order to guarantee that these communication devices, designated user equipment (UE) category (Cat.) NB1, can be single-chip solutions including the RF components. LTE, however, was designed for UEs with at least 1.4 MHz of bandwidth, capable of receiving and processing 6 PRBs per slot. Fully operating within a single PRB therefore requires a substantial redesign of many physical layer aspects, including the data and control channels. Since these physical channels must be contained in a single PRB, they are spread over time across several subframes. In fact, for the provision of NB-IoT a total of eight new channels and reference signals have been defined.
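The bandwidth figures in this paragraph follow directly from LTE's numerology; a quick check:

```python
# One LTE physical resource block (PRB) is 12 subcarriers of 15 kHz:
prb_khz = 12 * 15
print(prb_khz)      # → 180, the entire NB-IoT carrier bandwidth

# The narrowest legacy LTE channel (1.4 MHz) carries 6 such PRBs,
# i.e. 1080 kHz of occupied bandwidth plus guard bands:
print(6 * prb_khz)  # → 1080
```

So an NB-IoT UE processes one-sixth of the occupied bandwidth that even the smallest legacy LTE UE must handle, which is what makes the single-chip, single-PRB design feasible.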
Therefore, legacy LTE control information is not used, and the new LTE NB-IoT deployed in-band can be considered as a parallel RAT, using current eNB radio resources, with the advantage of reusing the same hardware and, of course, the same RF components and antennas, which can be software upgraded.
UE Cat. NB1 devices use much lower bandwidth (just one PRB, i.e., 180 kHz) than any other UE category, including UE Cat. M1, and operate half-duplex in FDD. They therefore present much reduced complexity and can be integrated in a single chip (including the full RF front end with the power amplifier required to deliver 23 dBm), with the trade-off of lower data rates.
In order to enhance the 3GPP system for IoT communication between IoT UEs and service provider platforms, architecture modifications have been introduced. The targets are to adapt the current 3GPP system to the new IoT requirements (low energy consumption with small and infrequent packet transmissions) and to include new services such as subscriber management, security and control-plane device triggering. This work continues in Release 14.
Standards will create the framework for the future system in which all things are connected.
For more information, read Inside 3GPP Release 13: Understanding the Standards for LTE-Advanced Enhancements: 2016 Update.
Last week I participated as a panelist in a webcast on IoT maintenance and upgrades; I thought I'd write a post on the subject.
The panel was organized around three topics, one for each of the distinct phases of an IoT project: development, deployment and in-field operation of devices.
For the first topic, we look into how IoT devices are built. There are three important design aspects to consider when planning for field maintenance and upgrades of IoT devices:
- Designing for maintenance: avoid a monolithic software architecture, so developers can limit the impact of fixes on the software as a whole and ease maintenance with modular components. This implies selecting a component-driven software development environment and a system runtime environment that support modular software construction and updates.
- Enabling preventive maintenance, i.e., providing a software runtime environment that is secure and robust: for instance, one where an app that crashes does not crash the entire system.
- Enabling corrective maintenance, i.e., selecting a runtime environment capable of full firmware updates but also of flexible, partial (component-level) software updates. This corresponds to the mobile OS paradigm: once in a while you update the OS on your smartphone (and need to reboot), but you install, update and uninstall apps independently much more often (no reboot). A secure runtime environment is needed to ensure trusted downloads, authenticate servers, verify code, execute apps safely, etc.
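A minimal sketch of the "verify code before install" step for a component-level update, assuming a hypothetical manifest of SHA-256 digests (a real runtime would also check a vendor signature over the manifest itself, not just the digests):

```python
import hashlib

def verify_component(manifest, name, blob):
    """Check a downloaded component against its manifest digest.

    `manifest` maps component names to expected SHA-256 hex digests.
    Returns True only if the component is listed and its bytes match.
    """
    expected = manifest.get(name)
    if expected is None:
        return False  # component not listed in the manifest: refuse to install
    return hashlib.sha256(blob).hexdigest() == expected

# Hypothetical component blob and the manifest shipped alongside the update:
firmware_blob = b"modbus-driver v1.2 binary contents"
manifest = {"modbus-driver": hashlib.sha256(firmware_blob).hexdigest()}

print(verify_component(manifest, "modbus-driver", firmware_blob))         # → True
print(verify_component(manifest, "modbus-driver", firmware_blob + b"!"))  # → False
```

Because each component is verified and installed independently, a bad download aborts one module's update instead of bricking the whole device, which is the point of avoiding monolithic images.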
For the second topic, we look into how to ease maintenance in the field. There are three points of view to take into consideration:
- End customers: Avoid taking the device to the repair shop or waiting for a technician to come.
- Operations/fleet management: Being able to do remote diagnosis and corrective maintenance, knowing that full firmware updates can be very problematic with IoT due to latency and low bandwidth at the edge.
- Manufacturers: Being able to do fixes easily while the development team is probably busy working on another project.
Corrective maintenance involves putting in place tools for remote monitoring, operations management, logging, downloading or uploading software and utilities. With thousands or millions of IoT devices being deployed, operators need to connect them to a web-based platform in order to manage them. Standards like MQTT, CoAP and LWM2M are emerging for connecting and managing a fleet of IoT devices.
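Whatever the protocol, a management platform typically identifies devices through structured topics or resource paths. A tiny sketch, assuming a hypothetical MQTT-style topic scheme of the form `fleet/<region>/<device_id>/status`:

```python
def parse_status_topic(topic):
    """Split a fleet-status topic into its fields.

    Assumes the (made-up) convention 'fleet/<region>/<device_id>/status';
    rejects anything that doesn't match, so malformed or hostile topics
    never reach the fleet database.
    """
    parts = topic.split("/")
    if len(parts) != 4 or parts[0] != "fleet" or parts[3] != "status":
        raise ValueError(f"unexpected topic: {topic}")
    return {"region": parts[1], "device_id": parts[2]}

print(parse_status_topic("fleet/emea/pump-042/status"))
# → {'region': 'emea', 'device_id': 'pump-042'}
```

Real deployments would use MQTT wildcard subscriptions (e.g., `fleet/+/+/status`) to route these messages, but the naming discipline is the same.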
For the third topic, we look into how devices are managed. A complete infrastructure in the cloud or with enterprise servers is required for doing three things:
- Data intelligence: Collecting data from devices, storing and presenting it, and doing big data analytics.
- Device management: Provisioning, monitoring, user account management, device administration, over-the-air updates, customer support.
- Software content management: Whether manufacturers/operators want to do full or partial software updates.
Connecting IoT devices to cloud services isn't easy, as operators need to put in place the proper device runtime environment; a protocol, network and security strategy; and services in the cloud.
With IoT being new, it's likely that device manufacturers and service providers are still searching for the right service or killer app. That's why it's important to give them the flexibility to ship devices quickly and adjust content and configuration later depending on markets, regions, options subscribed to by customers, etc. Manufacturers need the capability to update the software content (functionality/apps) on the fly upon any kind of evolution, whether market-driven or technical. This requires a flexible software environment on the device side that adjusts to a fast-moving IoT landscape.
View a replay of the panel discussion here.
For the last 15 years, cloud computing has been a game changer. It has created efficiencies and increased scalability, giving rise to the “as a service” phenomenon in the enterprise. Cloud computing has certainly become a standard in many IT environments, but as we move into a significantly more connected world where we want to support many new “things” and applications, we are starting to see a greater need for computing capabilities closer to the users and the things. This need is driving the next wave of innovations — something Cisco coined “fog computing.”
Fog computing is a system-level horizontal architecture that distributes resources and services of compute, storage, control and networking anywhere along the continuum from the cloud to the things.
This new capability will fill the technology gaps to meet the requirements of emerging IoT applications, many of which may not have been possible in a cloud-only environment. This can help a broader range of industries and consumer sectors increase their ability to support IoT and other emerging applications, including existing and future performance-critical, mission-critical and life-critical applications.
Fog computing success stories
Across vertical markets including transportation, utilities, smart cities, manufacturing, retail, energy, healthcare, agriculture, government and even the consumer space, fog computing has already demonstrated tremendous business value.
The manufacturing industry is full of prime examples of the power of fog. For example, Lordan, a global thermal-engineering heating and cooling manufacturer, used fog for its manufacturing automation system. Its implementation gave the company the ability to view overall throughput and track mission-critical manufacturing information in real time directly from the production floor, rather than rely on periodical assessments. This resulted in over 600 labor hours saved a month, with direct cost savings just weeks after deployment.
Mazak Corporation, which builds advanced technology solutions, has collaborated on its SmartBox technology. Mazak was looking to provide customers with an advanced, secure data collection system which would run on the network infrastructure of a customer’s factory floor. To do so, the fog application needed to support advanced security standards and real-time analytics. Enter the SmartBox, which utilizes fog computing to enable real-time manufacturing data and analytics from Mazak machines to significantly improve machine efficiency for Mazak’s manufacturing customers.
So how does fog computing work?
Fog computing will provide a standards-based way to distribute compute, storage and application resources and services closer to the users, along the continuum from the cloud to the things. This will be analogous to how TCP/IP provides a universal, standard way to distribute packets across the internet. Additionally, fog will provide standards-based ways to manage the lifecycles of the distributed resources and services; to secure systems and applications; to pool together the resources in different fog systems and in the cloud to support applications; to provide APIs for the developer community to create new fog applications; and for fog operators to deploy those applications.
To do so, fog needs to operate on an open architecture with interoperable standards. Since the same customer often needs services provided by both the cloud and the fog, fog should be, in many scenarios, integrated with the cloud to enable a unified end-to-end service platform to provide seamless services to the customer. Some platforms can be used to manage services in both the cloud and the fog. Applications developed for the cloud should be able to work in the fog without modification, and vice versa.
A fog system also needs to be able to communicate with all sorts of endpoints. The fog system can serve as a proxy for these endpoints, helping connect them to the cloud and performing local processing of the data they generate. It can also serve as a proxy for the cloud, providing services to the endpoints. The reality is that no one company can offer a full fog solution; fog must be supported by a large ecosystem of innovative companies.
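A toy illustration of the endpoint-proxy role: the fog node processes raw endpoint data locally and forwards only a compact summary to the cloud. The sensor values are made up:

```python
def summarize_for_cloud(readings):
    """Local (fog) pre-processing: instead of relaying every raw
    sample from the endpoints, send a compact summary upstream."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(sum(readings) / len(readings), 2),
    }

# e.g., temperature samples collected from nearby sensors:
print(summarize_for_cloud([21.2, 21.4, 35.9, 21.3]))
# → {'count': 4, 'min': 21.2, 'max': 35.9, 'mean': 24.95}
```

Four readings shrink to one record here; at millions of endpoints, that local reduction is what keeps backhaul bandwidth and cloud costs manageable while still surfacing the outlier (35.9) via the max field.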
In November 2015, innovators including Cisco, Dell, Intel, Microsoft, ARM and Princeton University launched the OpenFog Consortium to develop an open reference architecture. Another key goal of this consortium is to help the industry learn about the business value of fog computing, and therefore help accelerate market adoption. Since then, the consortium has grown to over 50 members, including not only industry leaders, but also startup technology innovators and research organizations.
There’s a lot of talk about smart cities created with IoT technologies as the future of urban living. Transportation, energy, healthcare, water and waste are some of the government sectors looking at integrating information and technology to make the lives of residents better. Yet, while each sensor and IoT application contributes value in its own way, they’re only individual pieces of the vision of what a smart city can bring.
Most analysts studying IoT and connected cities predict healthy growth. Gartner predicted that smart cities will use 1.6 billion connected “things” in 2016 alone, and IoT deployment in commercial buildings will reach more than 1 billion in 2018.
However, a really smart city goes beyond connected street lights, trash cans and booking a parking space with a smart swipe.
When all of these connected apps and their associated data exist in silos, the full value of the potential insight from this data sits untouched. City planners and CIOs would do well to think of the smart city itself as an infrastructure service platform for building highly interconnected applications. Instead of one application for the water department and another for the traffic department, applications interact, leverage insights derived from the intersection of the data generated in each vertical, and build on each other's value. That's when a loosely connected series of smart vertical applications actually builds a smart city.
Let’s talk about how a connected city can get smarter and smarter.
The city as an operating system
In the smart cities of the future, the city will provide the primary repository for collected data — therefore it’s the city that will provide a platform on which applications can be created that enhance the life of residents.
What’s the advantage? By creating the IoT application creation platform itself and exposing collected data for innovators to build on, the city monetizes its smart city investments by charging for access to valuable data. At the same time, the city is providing a critical technology enabler that will stimulate innovation and enrich the lives of residents. And what city doesn’t want to increase its coffers? Too often smart city initiatives drain city resources instead of replenishing them. To me, a city that makes money as it gets smarter is a truly smart city.
Use case: Traffic management delivers more than tickets
Here’s a use case: Traffic management and enforcement data becomes the foundation of an ecosystem of smart city applications to manage congestion, air quality and promote local commerce.
Traffic enforcement camera data. Many cities have adopted traffic enforcement cameras with the idea of making potentially dangerous intersections safer. All-seeing automated cameras take photos of vehicles entering an intersection on a red light and then a ticket for the vehicle’s registered owner shows up in the mail. It’s proven to be a successful approach to monitoring high-traffic intersections, reducing injuries and improving traffic flow.
However, a city with that traffic data already coming in could derive much more value by building an ecosystem based on that information. For example:
- Congestion charges
A fee for driving a vehicle during highly congested periods is levied in cities such as London and Singapore. You’re not running a red light, but if you do drive in a heavily congested zone of the city during peak traffic times and your car is picked up by the cameras, then you pay a congestion tax. In London, these funds help support the city’s transport system.
- Promote public transportation and directed parking
Park in the preferred parking garage (instead of on the crowded street), and a smart parking app automatically gives a discount on congestion charges. Residents can also receive incentives if they take available public transportation when traffic is anticipated to be heavy (a game or a concert is scheduled), including reduced fees and discounts on attractions.
- Shop local discounts
The city wants to promote commerce while still managing congestion. Residents receive discounts on congestion charges in targeted commerce zones — whether it’s parking or dining or shopping — if they choose to drive. What a great way to support local merchants! In a store at the point-of-sale system, a smart retail app can connect to the transportation data and provide discounts if sufficient purchases are made.
- Safety and security
In an emergency, the best route to safety can be delivered to those within an area. The designated route can take into account real-time traffic patterns to avoid creating more gridlock.
- Air quality
Real-time sensor data can warn citizens who are affected by allergens and irritants (in accordance with HIPAA requirements) that air quality in a specified area is at a level that can trigger asthma attacks, and congestion discounts can be provided if a resident chooses to drive their low-emission vehicle.
These are just a few examples of ways to monetize existing data through an ecosystem of IoT applications.
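To illustrate how such an ecosystem could stack incentives on a single data source, here is a hypothetical congestion-charge calculation; the fee, the discount rates and the multiplicative stacking rule are all invented for illustration:

```python
def congestion_charge(base_fee, in_peak, discounts):
    """Hypothetical congestion-charge calculation.

    The fee applies only during peak periods; percentage discounts
    (preferred-garage parking, shop-local zones, low-emission
    vehicles, ...) then stack multiplicatively.
    """
    if not in_peak:
        return 0.0
    fee = base_fee
    for pct in discounts:
        fee *= 1 - pct
    return round(fee, 2)

# Peak-hour trip with a 20% parking discount and a 10% shop-local discount:
print(congestion_charge(15.0, True, [0.20, 0.10]))  # → 10.8
```

The interesting part isn't the arithmetic but the inputs: each discount comes from a different vertical application (parking, retail, air quality) reading the same traffic-camera data.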
One of the fundamental challenges of the internet of things is that value can be obtained from data only if you change the culture and processes of working with data. Cities and companies may collect massive volumes of information, but that doesn't mean they benefit from it. The most exciting innovation can only begin when those silos are broken down.
What makes a smart city? Smart cities use data available from connected devices to benefit their citizens. An IoT application (such as traffic enforcement) should be viewed as merely the starting point. Enrichment and monetization of the data is where the journey gets really interesting. An IoT service creation and enrichment platform is a smart way to build, and monetize, a rich IoT application ecosystem that leverages the massive amounts of data collected by the smart city. The result? Better services and quality of life for citizens.
We participated in an IoT roundtable discussion last month at the ACG M&A East conference in Philadelphia. The roundtable was organized by Michael Morrissey of Philadelphia-based private equity firm Inverness Graham, which was the primary institutional investor in Raco Wireless leading up to its acquisition by Kore Telematics in late 2014. Other participants in the roundtable were John Horn, previous president of Raco Wireless and currently CEO of Ingenu, and Keith Schneider, who most recently served as group president of Verizon Telematics and as CEO of Network Fleet before its acquisition by Verizon.
The discussion covered a range of topics, including:
IoT market sizing (hype vs. reality): The point was made that new use cases are continually being developed in the IoT sector, and much of the growth may come from new applications that we can't even foresee today. Our own published estimates are for 50 billion connected devices by 2020, reflecting a compound annual growth rate of 22%. We also estimate that $6 trillion will be invested in the sector over the next five years, more than the total investment in 4G and 5G combined.
How IoT is accelerating the development of the sharing economy: IoT enables a shift from the classic concept of ownership to a rented or shared model. The traditional “sale” event as a one-time transfer of title may become less clear. And information about product usage flowing back to the manufacturer or distributor enables them to take a more active role in the aftermarket cycle.
Opportunities for investing in the middle market: As IoT increasingly becomes the purview of multibillion-dollar players, where does one look for investment opportunities in younger, up-and-coming companies? A few of the areas discussed included IoT security (a significant issue that impacts adoption and sector growth), data management and analytics. Other opportunities lie in the network, where companies such as John Horn's Ingenu are developing whole new platforms purpose-built for carrying IoT data, free of the constraints imposed by cellular infrastructure. The distinction between vertically oriented solutions, of which there are many in IoT, and more universal, horizontally applicable technologies was mentioned as an element impacting company valuations, in addition to other commonly cited variables such as revenue model, business growth and profitability.
An IoT M&A update
Year to date, 2016 has been very active for M&A in the IoT sector, with a total of 80 transactions through September 30, equal to the number for the full year 2015. However, the total disclosed value of transactions has increased 3.5 times to over $3.5 billion, reflecting an increase in average (disclosed) transaction size from $12.5 million to $44.8 million. Part of the jump is attributable to Cisco's acquisition of Jasper Technologies, announced in February 2016, the sector's first M&A deal valued at over $1 billion. Even without the Jasper transaction, the total disclosed value has more than doubled, driven by other large deals such as Alphabet's acquisition of Apigee in September for $516 million and Honeywell's acquisition of Xtralis in February for $480 million.
All three of these 2016 transactions fall in our “top 10” list of IoT M&A transactions going back to 2010. We believe this is reflective of the increased maturity of leading sector innovators and the growing aspirations among leading technology companies to be part of the future growth. Because of the high growth — and also because IoT touches on so many industries and itself encompasses so many applications and business models — we are hearing from a very wide range of corporate and financial buyers seeking to acquire in the sector.
While the majority of the work today in the internet of things is occurring in the transportation, infrastructure and environmental sectors, many business consulting, marketing and consumer-focused companies are looking at IoT as an important pillar for successfully leveraging (and monetizing) big data.
IoT drives interconnectivity of businesses, people and technologies across the internal enterprise and their external ecosystems. As a result, businesses leveraging IoT face new lines of operations and business intelligence, as well as unique ways of delivering services and products.
With all of these internet-connected devices, sensors and appliances delivering unique benefits, many enterprises implementing pilot IoT services and deliverables will face corporate governance issues related to the collection of personally identifiable data.
As IoT devices are often connected to cloud-based computing systems and third-party infrastructure, companies will need to reexamine and expand or adjust security policies and protocols for data protection. Another complicating factor in standardizing and operationalizing security practices for IoT is industry-specific regulation, such as HIPAA or NIST 800-171, that necessitates the establishment and continuous reinforcement of security controls.
IoT could change the security paradigm
Privacy protection issues in the IoT arena have the potential to extend due diligence considerations for CEOs and liabilities for boards. The c-suite must identify and weigh the new accompanying liabilities, cybersecurity investments and privacy implications of the information, data and analysis, both in the United States and globally.
Due diligence suggests that the c-suite should review the privacy protection capabilities and claims of partners and providers to minimize liability risk. Whether dealing with physical tools, marketing dashboards or streaming data analytics, corporate executives and IT leaders will need to consider which streams of information contain personally identifiable information (PII) or intellectual property; beginning in December 2017, organizations must also consider the controlled unclassified information (CUI) and controlled technical information that may be transiting their infrastructures.
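One way to begin triaging which data streams carry PII is a simple pattern scan. A minimal sketch follows; the patterns and the `flag_pii` helper are illustrative only, nowhere near a compliance-grade review:

```python
import re

# Two illustrative PII patterns (email address, US SSN-style number).
# A real PII inventory would go far beyond regexes on text records.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(record):
    """Return the names of the PII patterns found in a text record."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(record)]

print(flag_pii("ticket 4711: contact jane.doe@example.com"))  # ['email']
```

Even a crude scan like this helps decide which streams deserve the stricter handling the regulations above demand.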
The likely outcome of IoT enterprise integration will be the establishment of new information protection practices related to non-centralized computing at the “fog” and “cloud” locations. As a result, there will be a need for creative pre-engineered defenses, liability mitigation awareness and isolation techniques for meeting early-stage IoT business strategies. New aspects of reviewing and continuously improving training and awareness, risk assessments, auditing and accountability, and incident response communications need to become standard contractual requirements for IoT.
In most instances, these new practices will need to be designed into the system using virtual, automated components to keep up with the flow of IoT, cloud and information. For many companies, integrating IoT will require further research and an understanding of how IoT intersects with cloud services, business processes and PII.
In my previous post I discussed what we want to keep from the data gathered by all of those smart devices. We talked about raw data, averaging and thinking about possible use-cases for that data in the long run. Today, I would like to take that one step further.
Today we’re seeing huge growth not only in devices, but also in their diversity, and while these different devices all specialize in something specific, most are not connected to each other. We have devices for our health, smart house gear such as fridges, systems that keep our homes at a set temperature, smart security systems and so much more. At first sight, you might think that none of those need to be connected, but as devices become smarter, we will see lots of links forming between these different devices.
I am not going to discuss the protocols that could be used, or the frameworks that might emerge when different vendors start working together; instead, I want to discuss the importance of the data, and how the data you collect from one device can drive an action from another device.
The value of linking these disparate devices might not seem that obvious today, but if we use our imagination, we can already see some potential scenarios where different devices will use each other’s data to improve certain services.
Bear in mind that I do realize that in the following examples we are looking at serious potential privacy issues and that security should be top priority at all times. That said, I’m going to ignore these concerns for a second in the interest of exploring the potential of these interconnected, smart devices.
Gardening robots are becoming more and more popular. A couple of years ago they seemed like very expensive toys, but today they come in different sizes and price ranges, placing them within the means of a large number of people. But when is the best time to let your robot do the work? I’m certainly not a specialist in gardening (so I could personally benefit from such a service), but a smart weather system might direct your little robot to start mowing the lawn when it knows conditions are best, using both current weather data and data from weather-prediction services. Another example would be conserving water, so that the sprinkler system does not start if rain is expected within X number of hours.
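The watering rule above can be sketched in a few lines. This is a hedged example: `should_irrigate`, the six-hour horizon and the 50 percent threshold are assumptions for illustration, not any vendor’s API:

```python
def should_irrigate(rain_probabilities, horizon_hours=6, threshold=0.5):
    """Skip watering when rain looks likely within the next few hours.

    `rain_probabilities` holds one probability (0..1) per upcoming hour,
    as a weather-prediction service might supply them.
    """
    return all(p < threshold for p in rain_probabilities[:horizon_hours])

# Rain likely in three hours: hold off and let nature do the watering.
print(should_irrigate([0.1, 0.2, 0.8, 0.9]))  # False
```

The mowing decision would look similar, with wind, ground moisture or frost forecasts swapped in for rain probability.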
Remember the good old days when people were sifting through promotions, cutting coupons and bringing indexed boxes of them to the store to buy their groceries? If you’re over 30, you probably do. Now, think about that intelligent fridge. Based on your eating and ordering patterns, it could use web-based services from retailers in your region to search for promotions and offer you the best price for your groceries. And speaking of groceries, I am one of those people who literally has no time to get groceries, and I get very frustrated when I arrive at the store because I know I will spend a lot more time there than I had anticipated. What if my intelligent calendar could use the retailer’s data to check the best time for me to go shopping and automatically put that in my agenda as an appointment? Or take it even a step further: What if my health tracker could be connected, so that the system orders more calorie-rich food when I exercise heavily and lighter food when I’m in a lower-exercise rhythm?
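The calendar scenario boils down to picking the quietest store hour among my free slots. A small sketch, where `best_shopping_slot` and both data feeds (free calendar hours, a retailer’s crowd forecast) are hypothetical:

```python
def best_shopping_slot(free_slots, busyness):
    """Pick the free calendar slot when the store is forecast to be quietest.

    `free_slots`: hour labels when my calendar is open.
    `busyness`: hour label -> forecast crowd score (lower = quieter).
    """
    return min(free_slots, key=lambda slot: busyness.get(slot, float("inf")))

# Both 18:00 and 20:00 are free, but the store is quieter at 20:00.
print(best_shopping_slot(["18:00", "20:00"], {"18:00": 0.9, "20:00": 0.3}))
```

The interesting part is not the one-liner itself but that the two inputs come from entirely different systems, which is exactly the cross-device linking this post is about.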
Today when I arrive home from work, I turn up the heat (or at least, I do so in winter in the part of the world where I live). We even have smart systems now that can be started remotely 15 minutes before I arrive, so that the house is already at a comfortable temperature when I walk through the front door. But what if we take this one step further? What if we let the system decide based on the actual temperature, current energy costs and weather data? Maybe it should heat up quickly because energy costs will rise soon. Or perhaps it should wait to do anything, because a warm front is just now arriving. And what about smart windows or curtains? The system could open the curtains to let in more light and heat your home, or open the windows to cool it off when it is too warm.
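That decision logic, combining temperature, energy prices and an incoming warm front, might look something like this. All names and rules here (`heating_action`, the price comparison, the warm-front heuristic) are illustrative assumptions, not a real thermostat algorithm:

```python
def heating_action(indoor_c, target_c, price_now, price_soon, warm_front_c=0.0):
    """Decide what the heating system should do right now.

    indoor_c / target_c: current and desired indoor temperature (Celsius).
    price_now / price_soon: current and forecast energy price.
    warm_front_c: expected outdoor warming over the next few hours.
    """
    if indoor_c >= target_c:
        return "off"
    if warm_front_c >= (target_c - indoor_c):
        return "wait"       # incoming warm weather will close the gap on its own
    if price_soon > price_now:
        return "heat_now"   # energy is about to get more expensive, heat up quickly
    return "heat_slowly"

# 18C indoors, 21C target, prices about to rise: heat up quickly now.
print(heating_action(18, 21, price_now=0.20, price_soon=0.35))
```

The same skeleton could drive the curtains and windows, with sunlight forecasts standing in for energy prices.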
These are just three small examples, and if you continue the thought stream, you could even combine them. Your heating system might “gather” additional warmth and light from the sun and deliver that to the garden where you grow vegetables. And to continue the story, your refrigerator won’t buy vegetables when your garden reports that there are plenty ready to be consumed. Oh, and by the way, considering that next week there is going to be a heat wave, you’d better stock up on some fresh beverages, including sports drinks, as your calendar tells me that you are planning to exercise heavily …
As LTE Cat M1 and NB1 are progressively deployed by cellular operators — like Verizon’s recently announced joint initiative with Qualcomm leveraging Cat M1 LTE modems — some suggest that this spells the extinction of other low-power wide area network (LPWAN) technologies like LoRaWAN, a technology intended for connecting things wirelessly over ranges of up to 15 km, with battery life of up to 10 years, in regional, national or global networks.
More and more major telecom operators, however, are deploying or planning to use the technologies in parallel, including Orange, Swisscom, Proximus, APT, KPN, Bouygues and others. As we move more into the age of the internet of things in industrial and consumer settings, this collaboration of both 3GPP technologies and LPWAN may be the right approach.
Obviously, none of these operators has taken the decision to deploy LoRaWAN lightly. They see 3GPP technologies — like the LTE standards and others — and LoRaWAN as complementary. As both the public and private sectors adopt these technologies, here are a few things to consider about the benefits and use cases of each, and about the inherent flaw in the system that needs to be remedied.
Benefits of LPWAN
Broadly speaking, LPWAN is defined as a market segment where lowest cost and lowest power consumption are the key selection criteria for the communication technology. For realistic LPWAN use cases, with a few tens of messages per day, the power-consumption performance of LoRaWAN is five times better than Cat NB1 (currently the 3GPP state of the art, requiring R13 networks), and the peak-current performance is an order of magnitude lower. Thanks to the characteristics of the ultra-low-leak-current batteries used in most devices with a 10-year-plus battery life, this translates into an order-of-magnitude difference in battery size, and therefore total cost of ownership, in favor of LoRaWAN. In other words, if you want to power your things at little cost and with little energy consumption, LoRaWAN is often the way to go.
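To make the battery-size claim concrete, here is a back-of-the-envelope sizing sketch. The 10 µA and 50 µA average currents are hypothetical figures chosen only to show how a 5x consumption gap flows through to battery capacity; leakage and peak currents are ignored:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def battery_capacity_mah(avg_current_ua, lifetime_years):
    """Battery capacity (mAh) needed to sustain a given average current
    over the device lifetime, ignoring self-discharge."""
    return avg_current_ua / 1000.0 * HOURS_PER_YEAR * lifetime_years

# Hypothetical averages: 10 uA for a LoRaWAN node vs. five times that for Cat NB1.
lora_mah = battery_capacity_mah(10, 10)   # ~876 mAh over 10 years
nb1_mah = battery_capacity_mah(50, 10)    # ~4,380 mAh over 10 years
print(nb1_mah / lora_mah)                 # the 5x current gap becomes a 5x battery
```

Since battery cost and bulk scale roughly with capacity, the consumption gap translates almost directly into the total-cost-of-ownership gap the paragraph describes.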
Another important factor is that IoT is only a marginal market for national telco operators. Most IoT applications are instead expected to be deployed in a “campus” scenario: airports, smart factories, smart cities, smart buildings, smart agriculture, etc. The use cases in this segment are expected to be dense deployments (thousands of devices in a relatively small area), served from on-site dedicated LPWAN base stations. Such use cases are much easier to deploy and manage with unlicensed-spectrum technologies, like LPWAN.
Benefits of 3GPP technologies
Of course, this does not mean that all IoT applications will choose LPWAN, or specifically LoRaWAN. In many cases, especially everyday consumer cases, LPWAN isn’t the best technology for the job. Whenever a device needs to take a picture or transmit hundreds of kilobytes of data, for instance, 3GPP technologies or Wi-Fi will be used; think of wearables such as a runner’s Fitbit. 3GPP technologies can also help network providers like Verizon extend their business models to include not only better connectivity but compute capabilities that can bring them into new areas like retail point-of-sale and asset tracking.
Collaboration of both
In many cases, LPWAN and LTE Cat 1 will collaborate. For example, an LPWAN campus base station serving thousands of smoke detectors may use LTE Cat 1 as a backhaul technology and LoRaWAN to connect to the individual devices. A sensor package may send back an abnormal data reading over LoRaWAN, then be “woken” to capture a snippet of local video, which is streamed back over LTE Cat 1.
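The smoke-detector example can be sketched as gateway logic: forward every LoRaWAN uplink over the LTE Cat 1 backhaul, and queue a wake-up downlink when a reading looks abnormal. The field names, threshold and `wake_camera` command are all assumptions; a real gateway would speak the LoRaWAN network-server protocol:

```python
def handle_uplink(reading, threshold=50.0):
    """Campus-gateway sketch: LoRaWAN uplinks in, LTE Cat 1 backhaul out.

    `reading` is a dict like {"dev": "smoke-17", "value": 61.0}.
    Returns a list of (channel, payload) actions to execute.
    """
    actions = [("backhaul_lte_cat1", reading)]  # always forward upstream
    if reading["value"] > threshold:
        # Abnormal reading: queue a LoRaWAN downlink to wake the device so it
        # can capture a video snippet and stream it back over LTE Cat 1.
        actions.append(("downlink_lorawan",
                        {"dev": reading["dev"], "cmd": "wake_camera"}))
    return actions

print(handle_uplink({"dev": "smoke-17", "value": 61.0}))
```

Note the asymmetry: the chatty, high-bandwidth path (video) goes straight over cellular, while the low-power path (sensor readings, wake commands) stays on LoRaWAN.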
OSS/BSS redesign needed
With all this said, connectivity capability is not the real, key issue; the OSS/BSS systems and ecosystem management are. How can operators or enterprise customers manage hundreds of millions of devices, tens of thousands of applications contributed by hundreds of companies and petabytes of data efficiently enough to create a viable ecosystem? How can millions of things, speaking only occasionally and very quietly, coexist on a network that is also host to talkative humans bent on maxing out their bandwidth with Netflix on the move, lifelogging with Periscope and Pokémon Go? That’s a question we’ll return to in a future article.
Communications service providers are adopting increasingly diverse strategies to address the IoT opportunity. Many are aggressively pursuing vertical applications, while others are actively positioning themselves as horizontal players focused on the provision of connectivity. A number of major players are also putting in place the capabilities for a more global offering, while at the bottom end of the market, smaller CSPs are accepting that their role will be little more than connectivity provider. These changing dynamics, driven by an increasingly mature approach to IoT, mean that we expect the market for CSPs in IoT to evolve dramatically over the next 2-3 years.
The IoT opportunity represents one of the bright spots for communications service providers, which are facing increasing competition and eroding margins; in many cases it has been identified as a key growth driver. One of our key roles over the last few years has been helping communications service providers navigate the opportunity in IoT. One part of that support is our yearly IoT CSP report, which compares the strategies of major CSPs in IoT, analyzing capabilities and describing best practices. Through that research, as well as a vast amount of other work with CSPs in IoT around the world, we inevitably identify a number of interesting trends. In this article we highlight a few of the ways in which CSPs’ approach to IoT has been changing.
One of the factors that define CSP approaches to IoT is the extent to which they focus on developing and deploying applications for vertical sectors, or instead pursue just horizontal capabilities, i.e., the provision of connectivity. The standard-bearer for the vertical approach is probably Verizon, which has placed a multibillion-dollar bet on the fleet management space, acquiring first Hughes Telematics for $612 million in 2012, and then Fleetmatics for $2.4 billion in August 2016. The reasoning is clear: the lion’s share of the value in IoT is in the provision of the end application, so Verizon is aiming to be the provider of end services in that specific sector. Other CSPs have pursued similar approaches, for instance Vodafone with its Cobra Automotive acquisition, Orange in healthcare, Telefonica in smart cities and retail, and so forth.
Historically we would have seen the evolution of CSP approaches to IoT as quite linear, i.e., moving from a wholesale approach based on selling SIMs, through setting up dedicated units and implementing IoT management platforms, to delving deeper into verticals to realize more of the value in IoT. In the last 12 months, however, we have seen some variation from that. First, some CSPs are taking a more horizontal approach to the market. This is not necessarily a less sophisticated offering; rather, it is a case of offering more sophisticated horizontal capabilities, e.g., around supporting multiple access technologies or security. Tele2 is probably the best example of this type of CSP. Second, there is an emerging trend of smaller operators, mostly operating in a single country, accepting a secondary position in the support for IoT. Part of the stimulus for this change of attitude comes from the bigger CSPs and from value-added resellers, who are offering to act, in effect, as an outsourcing partner for those smaller operators’ IoT offerings. One example of this kind of activity is Vodafone’s licensing of its GDSP platform to other operators, and its support for Partner market operators. Much of the other alliance activity, e.g., the IoT World Alliance and the Global M2M Association, provides a community of global players that smaller operators can tap into for help with global offerings. Even value-added resellers are getting in on the act. The best example is Aeris Communications, which has become increasingly focused on its line of business supporting CSPs that may not themselves have the IoT-related expertise necessary to address the market opportunity.
Communications service providers have a critical role to play in IoT, even if it is only in the provision of connectivity; that is even more true with the deployment of extensive new IoT-friendly networks in the form of low power wide area networks. However, there are many additional roles that they can play, ranging from outsourcing all elements of IoT bar the connectivity, all the way through to embracing the substantial opportunity of monetizing the data generated by IoT, a topic we have not even touched on here. The key recommendation that Machina Research always makes is “use what you have,” i.e., there is no single correct strategy for every CSP. Each starts from a different position based on network assets, historical anomalies (for instance, a history of dealing with healthcare-related issues), geographical coverage, business structure (e.g., whether it has an IT services arm), sensitivity to risk and numerous other factors. And each must shape its strategy according to those characteristics.