IoT Agenda


November 1, 2016  3:54 PM

IoT maintenance and upgrades

Vincent Perrier Profile: Vincent Perrier

Last week I participated as a panelist in a webcast on IoT maintenance and upgrades; I thought I'd write a post on the subject.

The panel was organized around three topics, one for each of the distinct phases of an IoT project: development, deployment and operation of devices in the field.

For the first topic, we looked at how IoT devices are built. There are three important design aspects to consider when planning for field maintenance and upgrades of IoT devices:

  1. Design for maintenance: Avoid a monolithic software architecture so that developers can limit the impact of fixes on the rest of the software and ease maintenance through modular components. This implies selecting a component-driven software development environment and a system runtime environment that support modular software construction and updates.
  2. Enable preventive maintenance: Provide a software runtime environment that is more secure and robust; for instance, an app that crashes does not crash the entire system.
  3. Enable corrective maintenance: Select a runtime environment capable of full firmware updates, but also of flexible partial software (component-level) updates. This corresponds to the mobile OS paradigm where once in a while you do the OS update on your smartphone (and need to reboot), but install/update/uninstall apps independently much more often (no reboot). A secure runtime environment is needed to ensure trusted downloads, authenticate servers, verify code, safely execute apps and so on (see the sketch after this list).
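
To make that third point concrete, here is a minimal sketch in Python of the kind of check a secure runtime performs before applying a component-level update. The manifest format and the install step are hypothetical placeholders, and a production runtime would verify a vendor signature rather than a bare digest.

```python
# Minimal sketch of a component-level update check (hypothetical manifest format).
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a downloaded component file."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_and_install(component: Path, manifest_path: Path) -> bool:
    """Install a single component only if its digest matches the trusted manifest."""
    manifest = json.loads(manifest_path.read_text())   # e.g. {"sensor-driver.bin": "ab12..."}
    expected = manifest.get(component.name)
    if expected is None or sha256_of(component) != expected:
        print(f"rejected {component.name}: digest mismatch or unknown component")
        return False
    # Hand off to the modular runtime (placeholder): only this component is
    # swapped out; the rest of the system keeps running, no full reboot needed.
    print(f"installing {component.name}")
    return True
```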

For the second topic, we looked at how to ease maintenance in the field. There are three points of view to take into consideration:

  1. End customers: Not having to take the device to the repair shop or wait for a technician to come.
  2. Operations/fleet management: Being able to do remote diagnosis and corrective maintenance, knowing that full firmware updates can be very problematic with IoT due to latency and low bandwidth at the edge.
  3. Manufacturers: Being able to do fixes easily while the development team is probably busy working on another project.

Corrective maintenance involves putting in place tools for remote monitoring, operations management, logging, downloading or uploading software and utilities. With thousands or millions of IoT devices being deployed, operators need to connect them to a web-based platform in order to manage them. Standards like MQTT, CoAP and LWM2M are emerging for connecting and managing a fleet of IoT devices.
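
As an illustration of what connecting a fleet looks like at the device end, here is a minimal sketch using MQTT, assuming the paho-mqtt Python library (1.x callback API) and a hypothetical broker and topic layout; CoAP and LWM2M agents follow the same report-status/receive-commands pattern.

```python
# Minimal device-side MQTT agent sketch, assuming the paho-mqtt library
# (1.x callback API) and a hypothetical broker/topic layout.
import json
import paho.mqtt.client as mqtt

BROKER = "mqtt.example-fleet.io"       # hypothetical fleet-management broker
DEVICE_ID = "device-0042"

def on_message(client, userdata, msg):
    # Operations can push commands (e.g., "reboot", "update component X") here.
    command = json.loads(msg.payload)
    print(f"received command: {command}")

client = mqtt.Client(client_id=DEVICE_ID)
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(f"fleet/{DEVICE_ID}/commands")

# Publish a small status report so the web-based platform can monitor the device.
client.publish(f"fleet/{DEVICE_ID}/status",
               json.dumps({"uptime_s": 3600, "fw": "1.2.3", "battery": 87}))
client.loop_forever()
```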

For the third topic, we looked at how devices are managed. A complete infrastructure in the cloud or on enterprise servers is required for doing three things:

  1. Data intelligence: Collecting data from devices, storing and presenting it, and doing big data analytics.
  2. Device management: Provisioning, monitoring, user account management, device administration, over-the-air updates, customer support.
  3. Software content management: Whether manufacturers/operators want to do full or partial software updates.

Connecting IoT devices to cloud services isn't easy, as operators need to put in place the proper device runtime environment, protocol/network/security strategy and services in the cloud.

With IoT being new, it's likely that device manufacturers and service providers are still looking for the proper service or killer app. That's why it's important to give them the flexibility to ship devices quickly and adjust content and configuration later depending on markets, regions, options subscribed to by customers, etc. Manufacturers need the capability to update the software content (functionality/apps) on the fly upon any kind of evolution, market-driven or technical. This requires a flexible software environment on the device side that adjusts to a fast-moving IoT landscape.

View a replay of the panel discussion here.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

October 31, 2016  2:35 PM

Fog computing and the OpenFog Consortium

Tao Zhang Profile: Tao Zhang
Cloud Computing, FOG, fog computing, Internet of Things, iot

For the last 15 years, cloud computing has been a game changer. It has created efficiencies and increased scalability, giving rise to the “as a service” phenomenon in the enterprise. Cloud computing has certainly become a standard in many IT environments, but as we move into a significantly more connected world where we want to support many new “things” and applications, we are starting to see a greater need for computing capabilities closer to the users and the things. This need is driving the next wave of innovations — something Cisco coined “fog computing.”

Fog computing is a system-level horizontal architecture that distributes resources and services of compute, storage, control and networking anywhere along the continuum from the cloud to the things.

This new capability will fill the technology gaps to meet the new requirements of emerging IoT applications, many of which may not have been possible with a cloud-only environment. This can then help a broader range of industries and consumer sectors increase their ability to support IoT and other emerging applications, including existing and future performance-critical, mission-critical and life-critical applications.

Fog computing success stories

Across vertical markets including transportation, utilities, smart cities, manufacturing, retail, energy, healthcare, agriculture, government and even the consumer space, fog computing has already demonstrated tremendous business value.

The manufacturing industry is full of prime examples of the power of fog. For example, Lordan, a global thermal-engineering heating and cooling manufacturer, used fog for its manufacturing automation system. Its implementation gave the company the ability to view overall throughput and track mission-critical manufacturing information in real time directly from the production floor, rather than relying on periodic assessments. This resulted in over 600 labor hours saved a month, with direct cost savings just weeks after deployment.

Mazak Corporation, which builds advanced technology solutions, has collaborated on its SmartBox technology. Mazak was looking to provide customers with an advanced, secure data collection system which would run on the network infrastructure of a customer’s factory floor. To do so, the fog application needed to support advanced security standards and real-time analytics. Enter the SmartBox, which utilizes fog computing to enable real-time manufacturing data and analytics from Mazak machines to significantly improve machine efficiency for Mazak’s manufacturing customers.

So how does fog computing work?

Fog computing will provide a standards-based way to distribute compute, storage and application resources and services closer to the users along the continuum from the cloud to the things. This will be analogous to how TCP/IP provides a universal, standard way to distribute packets across the internet. Additionally, fog will provide standards-based ways to manage the lifecycles of the distributed resources and services, to secure systems and applications, to pool together the resources in different fog systems and in the cloud to support applications, to provide APIs for the developer community to create new fog applications, and to let fog operators deploy those applications.

To do so, fog needs to operate on an open architecture with interoperable standards. Since the same customer often needs services provided by both the cloud and the fog, fog should be, in many scenarios, integrated with the cloud to enable a unified end-to-end service platform to provide seamless services to the customer. Some platforms can be used to manage services in both the cloud and the fog. Applications developed for the cloud should be able to work in the fog without modification, and vice versa.

A fog system also needs to be able to communicate with all sorts of endpoints. The fog system can serve as a proxy for these endpoints, helping connect them to the cloud and performing local processing of the data they generate. The fog system can also serve as a proxy for the cloud, providing services to the endpoints. The reality is that no one company can offer a full fog solution; fog must be supported by a large ecosystem of innovative companies.
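
As a minimal sketch of that proxy role (in Python, with hypothetical names for the cloud endpoint and the threshold, and a print in place of the real upload), a fog node might act on latency-sensitive readings locally while forwarding only compact summaries upstream:

```python
# Minimal sketch of a fog node acting as a proxy between endpoints and the cloud.
# CLOUD_URL and ALARM_THRESHOLD are illustrative assumptions, not part of any
# OpenFog specification.
import time
import statistics

CLOUD_URL = "https://cloud.example.com/ingest"   # hypothetical cloud endpoint
ALARM_THRESHOLD = 80.0

class FogNode:
    def __init__(self):
        self.buffer = []

    def handle_reading(self, sensor_id: str, value: float):
        """Local processing: act immediately at the edge, buffer for the cloud."""
        if value > ALARM_THRESHOLD:
            self.actuate_locally(sensor_id)      # millisecond-scale local control loop
        self.buffer.append(value)

    def actuate_locally(self, sensor_id: str):
        print(f"fog-level action on {sensor_id} without a cloud round trip")

    def sync_to_cloud(self):
        """Periodic proxy duty: send a compact summary upstream, not raw data."""
        if not self.buffer:
            return
        summary = {"count": len(self.buffer),
                   "mean": statistics.mean(self.buffer),
                   "max": max(self.buffer),
                   "ts": time.time()}
        print(f"POST {CLOUD_URL}: {summary}")     # placeholder for the real upload
        self.buffer.clear()
```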

In November 2015, innovators including Cisco, Dell, Intel, Microsoft, ARM and Princeton University launched the OpenFog Consortium to develop an open reference architecture. Another key goal of this consortium is to help the industry learn about the business value of fog computing, and therefore help accelerate market adoption. Since then, the consortium has grown to over 50 members, including not only industry leaders, but also startup technology innovators and research organizations.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 31, 2016  2:19 PM

Building smart cities with IoT ecosystems for the future

Dean Hamilton Profile: Dean Hamilton
Data Management, Internet of Things, iot, smart city, Traffic management

There’s a lot of talk about smart cities created with IoT technologies as the future of urban living. Transportation, energy, healthcare, water and waste are some of the government sectors looking at integrating information and technology to make the lives of residents better. Yet, while each sensor and IoT application contributes value in its own way, they’re only individual pieces of the vision of what a smart city can bring.

Most analysts studying IoT and connected cities predict healthy growth. Gartner predicted that smart cities will use 1.6 billion connected "things" in 2016 alone, and that IoT deployment in commercial buildings will reach more than 1 billion in 2018.

However, a really smart city goes beyond connected street lights, trash cans and booking a parking space with a smart swipe.

When all of these connected apps and their associated data exist in silos, the full value of the potential insight from this data sits untouched. City planners and CIOs would do well to think of the smart city itself as an infrastructure service platform for building highly interconnected applications. Instead of one application for the water department and another for the traffic department, applications interact, leverage insights derived from the intersection of the data generated in each vertical, and build on each other's value. That's when a loosely connected series of smart vertical applications actually becomes a smart city.

Let’s talk about how a connected city can get smarter and smarter.

The city as an operating system

In the smart cities of the future, the city will provide the primary repository for collected data; therefore, it's the city that will provide a platform on which applications can be created that enhance the lives of residents.

What's the advantage? By providing the IoT application creation platform itself and exposing collected data for innovators to build on, the city monetizes its smart city investments by charging for access to valuable data. At the same time, the city is providing a critical technology enabler that will stimulate innovation and enrich the lives of residents. And what city doesn't want to increase its coffers? Too often, smart city initiatives drain city resources instead of replenishing them. To me, a city that makes money as it gets smarter is a truly smart city.

Use case: Traffic management delivers more than tickets

Here’s a use case: Traffic management and enforcement data becomes the foundation of an ecosystem of smart city applications to manage congestion, air quality and promote local commerce.

Data from one application can be used in a number of different areas in smart cities.

Traffic enforcement camera data. Many cities have adopted traffic enforcement cameras with the idea of making potentially dangerous intersections safer. All-seeing automated cameras take photos of vehicles entering an intersection on a red light and then a ticket for the vehicle’s registered owner shows up in the mail. It’s proven to be a successful approach to monitoring high-traffic intersections, reducing injuries and improving traffic flow.

However, a city with that traffic data already coming in could derive much more value by building an ecosystem based on that information. For example:

  • Congestion charges
    A fee for driving a vehicle during highly congested periods is levied in cities such as London and Singapore. You’re not running a red light, but if you do drive in a heavily congested zone of the city during peak traffic times and your car is picked up by the cameras, then you pay a congestion tax. In London, these funds help support the city’s transport system.
  • Promote public transportation and directed parking
    Park in the preferred parking garage (instead of on the crowded street), and a smart parking app automatically gives a discount on congestion charges. Residents can also receive incentives if they take available public transportation when traffic is anticipated to be heavy (a game or a concert is scheduled), including reduced fees and discounts on attractions.
  • Shop local discounts
    The city wants to promote commerce while still managing congestion. Residents receive discounts on congestion charges in targeted commerce zones — whether it’s parking or dining or shopping — if they choose to drive. What a great way to support local merchants! In a store at the point-of-sale system, a smart retail app can connect to the transportation data and provide discounts if sufficient purchases are made.
  • Safety and security
    In an emergency, the best route to safety can be delivered to those within an area. The designated route can take into account real-time traffic patterns to avoid creating more gridlock.
  • Air quality
    Real-time sensor data can warn citizens who are affected by allergens and irritants (in accordance with HIPAA requirements) that air quality in a specified area has reached a level that can trigger asthma attacks, and congestion discounts can be provided if a resident chooses to drive a low-emission vehicle.

These are just a few examples of ways to monetize existing data through an ecosystem of IoT applications.
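
To make the ecosystem idea concrete, here is a minimal sketch in Python of how camera-derived congestion data could be combined with parking and point-of-sale data to compute a discounted congestion charge. All rates, thresholds and rules are hypothetical.

```python
# Minimal sketch of one vertical's data (traffic cameras) feeding another
# application (congestion pricing with local-commerce discounts).
# All names, rates and rules are hypothetical, for illustration only.
from dataclasses import dataclass

BASE_CONGESTION_CHARGE = 11.50   # hypothetical daily charge, in local currency

@dataclass
class TripContext:
    entered_congestion_zone: bool
    used_preferred_garage: bool      # from the smart-parking app
    local_purchases: float           # from point-of-sale data in a commerce zone

def congestion_charge(trip: TripContext) -> float:
    if not trip.entered_congestion_zone:
        return 0.0
    charge = BASE_CONGESTION_CHARGE
    if trip.used_preferred_garage:
        charge *= 0.75               # directed-parking discount
    if trip.local_purchases >= 50:
        charge *= 0.80               # shop-local discount
    return round(charge, 2)

# Example: a driver who parked in the preferred garage and spent 60 locally.
print(congestion_charge(TripContext(True, True, 60.0)))   # 6.9
```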

One of the fundamental challenges of the internet of things is that value can be obtained from data only if you can change the culture and processes of working with data to derive its full benefit. Cities and companies may collect massive volumes of information, but that doesn't mean they benefit from it. The most exciting innovation can only begin when those silos are broken down.

What makes a smart city? Smart cities use data available from connected devices to benefit their citizens. An IoT application (such as traffic enforcement) should be viewed as merely the starting point. Enrichment and monetization of the data is where the journey gets really interesting. An IoT service creation and enrichment platform is a smart way to build — and monetize — a rich IoT application ecosystem that leverages the massive amounts of data collected by the smart city. The result? Better services and quality of life for its citizens.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 28, 2016  11:43 AM

IoT roundtable: Hype vs. reality, M&A update and more

James Turino Profile: James Turino
Internet of Things, iot, M&A, Mergers & Acquisitions

We participated in an IoT roundtable discussion last month at the ACG M&A East conference in Philadelphia. The roundtable was organized by Michael Morrissey of Philadelphia-based private equity firm Inverness Graham, which was the primary institutional investor in Raco Wireless leading up to its acquisition by Kore Telematics in late 2014. Other participants in the roundtable were John Horn, former president of Raco Wireless and current CEO of Ingenu, and Keith Schneider, who most recently served as group president of Verizon Telematics and as CEO of Network Fleet before its acquisition by Verizon.

The discussion covered a range of topics, including:

IoT market sizing (hype vs. reality): The point was made that new use cases are continually being developed in the IoT sector, and much of the growth may come from new applications that we can’t even foresee today.  Our own published estimates are for 50 billion connected devices by 2020, reflecting a compound growth rate of 22% annually.  We also estimate that $6 trillion will be invested in the next five years, more than total investment in 4G and 5G combined.

How IoT is accelerating the development of the sharing economy: IoT enables a shift from the classic concept of ownership to a rented or shared model.  The traditional “sale” event as a one-time transfer of title may become less clear.  And information about product usage flowing back to the manufacturer or distributor enables them to take a more active role in the aftermarket cycle.

Opportunities for investing in the middle market: As IoT becomes more and more the purview of multibillion-dollar players, where does one look for investment opportunities in younger, up-and-coming companies? A few of the areas discussed included IoT security (a significant issue that impacts adoption and sector growth), data management and analytics. Other areas are in the network, where companies such as John Horn's Ingenu are developing whole new platforms purpose-built for carrying IoT data and free of the constraints imposed by cellular infrastructure. The distinction between vertically oriented solutions, of which there are many in IoT, and more universal, horizontally applicable technologies was mentioned as an element impacting company valuations, in addition to other commonly cited variables such as revenue model, business growth and profitability.

An IoT M&A update

Year to date, 2016 has been very active for M&As in the IoT sector, with a total of 80 transactions through September 30, equal to the number for the full year 2015.  However, total disclosed value of transactions has increased 3.5 times to over $3.5 billion, reflecting an increase in average (disclosed) transaction size from $12.5 million to $44.8 million.  Part of the jump is attributed to the acquisition by Cisco of Jasper Technologies announced in February 2016 — the sector’s first M&A deal valued at over $1 billion. Even without the Jasper transaction, total disclosed value has increased more than 2 times, driven by other large deals such as Alphabet’s acquisition of Apigee in September for $516 million and Honeywell’s acquisition of Xtralis in February for $480 million.

IoT M&A activity

All three of these 2016 transactions fall in our “top 10” list of IoT M&A transactions going back to 2010.  We believe this is reflective of the increased maturity of leading sector innovators and the growing aspirations among leading technology companies to be part of the future growth.   Because of the high growth — and also because IoT touches on so many industries and itself encompasses so many applications and business models — we are hearing from a very wide range of corporate and financial buyers seeking to acquire in the sector.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 28, 2016  10:27 AM

Minimizing IoT security liabilities

Maria Horton Profile: Maria Horton
Data privacy, Internet of Things, iot, iot security, privacy

While the majority of the work today in the internet of things is occurring in the transportation, infrastructure and environmental sectors, many business consulting, marketing and consumer-focused companies are looking at IoT as an important pillar for successfully leveraging (and monetizing) big data.

IoT drives interconnectivity of businesses, people and technologies across the internal enterprise and their external ecosystems. As a result, businesses leveraging IoT face new lines of operations and business intelligence, as well as unique ways of delivering services and products.

With all of the internet-connected devices, sensors and appliances establishing unique benefits, many enterprises implementing pilot IoT services and deliverables will face corporate governance issues related to the collection of personally identifiable data.

As IoT devices are often connected to cloud-based computing systems and third-party infrastructure, companies will need to reexamine and expand or adjust security policies and protocols for data protection. Another complicating factor in standardizing and operationalizing security practices for IoT is the set of industry-specific regulations, such as HIPAA or NIST 800-171, that necessitate the establishment and continuous reinforcement of security controls.

IoT could change the security paradigm

The implications of privacy protection issues in the IoT arena have the potential to extend due diligence considerations for CEOs and expand board liabilities. The c-suite must determine and consider the new accompanying liabilities, cybersecurity investments and privacy implications of the information, data and analysis, both in the United States and globally.

Due diligence suggests that the c-suite should review privacy protection capabilities and claims by partners and providers to minimize liability risk. Whether dealing with physical tools, marketing dashboards or streaming data analytics, corporate executives and IT leaders will need to consider which streams of information contain personally identifiable information (PII) or intellectual property; beginning in December 2017, organizations must also consider the controlled unclassified information and controlled technical information that may be transiting their infrastructures.

The likely outcome of IoT enterprise integration will be the establishment of new information protection practices related to non-centralized computing at the “fog” and “cloud” locations. As a result, there will be a need for creative pre-engineered defenses, liability mitigation awareness and isolation techniques for meeting early-stage IoT business strategies. New aspects of reviewing and continuously improving training and awareness, risk assessments, auditing and accountability, and incident response communications need to become standard contractual requirements for IoT.

In most instances, these new practices will need to be designed into the system using virtual, automated components to keep up with the flow of IoT, cloud and information. For many companies, integrating IoT will require further research and an understanding of how IoT interconnections intersect with the cloud, business processes and PII.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 27, 2016  11:56 AM

IoT, the connected mind state

Mike Resseler Profile: Mike Resseler
Connectivity, consumer, iot, IoT applications, wearable

In my previous post I discussed what we want to keep from the data gathered by all of those smart devices. We talked about raw data, averaging and thinking about possible use cases for that data in the long run. Today, I would like to take that one step further.

Today we're seeing huge growth not only in devices, but also in their diversity, and while these different devices all specialize in something specific, most are not connected to each other. We have devices for our health, smart house gear such as fridges, systems that keep the house at a certain temperature, smart security systems and much more. At first sight, you might think that none of those need to be connected to each other, but as devices become smarter, we will see lots of links forming between these different devices.

I am not going to discuss here protocols that can be used or a possible framework that might appear when different vendors start working together; instead I want to discuss the importance of the data and how the data that you collect from one device can drive an action from another device.

The value of linking these disparate devices might not seem that obvious today, but if we use our imagination, we can already see some potential scenarios where different devices will use each other’s data to improve certain services.

Bear in mind that I do realize that in the following examples we are looking at serious potential privacy issues and that security should be top priority at all times. That said, I’m going to ignore these concerns for a second in the interest of exploring the potential of these interconnected, smart devices.

Gardening

Gardening robots are becoming more and more popular. A couple of years ago they seemed like very expensive toys, but today they come in different sizes and price ranges, placing them within the means of a large number of people. But when is the best time to let your robot do the work? I'm certainly not a specialist in gardening (so I could personally benefit from such a service), but a smart weather system might direct your little robot to start mowing the lawn when it knows conditions are best, using both current weather data and data from weather-prediction services. Another example would be conserving water so that the sprinkler system does not start if rain is expected within X number of hours.
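
Here is a minimal sketch of that sprinkler decision in Python. The weather lookup is a placeholder for a call to a weather-prediction service, and the thresholds are illustrative assumptions.

```python
# Minimal sketch of the sprinkler decision described above; the weather lookup
# and thresholds are hypothetical, not a real API.
def get_rain_probability(hours_ahead: int) -> float:
    """Placeholder: a real system would query a weather-prediction service."""
    return 0.2   # canned value for the sketch

def should_water(soil_moisture: float, hours_ahead: int = 6) -> bool:
    """Skip watering if the soil is wet enough or rain is likely soon."""
    if soil_moisture > 0.4:                      # reading from a garden sensor
        return False
    if get_rain_probability(hours_ahead) > 0.6:
        return False                             # let the rain do the work
    return True

print(should_water(soil_moisture=0.25))          # True with the canned forecast
```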

Targeted marketing

Remember the good old days when people were sifting through promotions, cutting coupons and bringing indexed boxes of them to the store to buy their groceries? If you're over 30, you probably do. Now, think about that intelligent fridge. Based on your eating and ordering patterns, it could use web-based services from retailers in your region to search for promotions and offer you the best price for your groceries. And speaking of groceries, I am one of those people who literally has no time to get groceries, and I get very frustrated when I arrive at the store because I know I will spend a lot more time there than I had anticipated. What if my intelligent calendar could use the retailer's data to check the best time for me to go shopping and automatically put that in my agenda as an appointment? Or take it even a step further: What if my health tracker could be connected so that the system orders more calorie-rich food when I exercise and fewer calories when I'm in a lower-exercise rhythm?

Heating system

Today when I arrive home from work, I turn up the heat (or at least, I do so in winter in the part of the world where I live). We even have smart systems now that we can remotely start 15 minutes before we arrive home so that the house is already at a comfortable temperature when we walk through the front door. But what if we take this one step further? What if we let the system decide based on the actual temperature, current energy costs and weather data? Maybe it should heat up quickly because energy costs will rise soon. Or perhaps it should wait before doing anything, because a warm front is just now arriving. And what about smart windows or curtains? The thermostat could open the curtains to let in more light and heat your home, or open the windows to cool it off when it is too warm.

Conclusion

These are just three small examples, and if you continue the thought stream, you could even combine them. Your heating system might "gather" additional warmth and light from the sun and deliver that to your garden where you grow some vegetables. And to continue the story, your refrigerator won't buy vegetables when your garden tells you that there are plenty ready to be consumed. Oh, and by the way, considering that next week there is going to be a heat wave, you'd better stock up on some fresh beverages, including sports drinks, as your calendar shows that you are planning to exercise heavily …

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 27, 2016  11:15 AM

LPWAN technology: Can CatM1, CatNB1 and LoRaWAN play nice together?

Olivier Hersent Profile: Olivier Hersent
3GPP, Internet of Things, iot, LoRA, LoraWAN, LPWAN, LTE, Wireless, Wireless communications, Wireless networking

As LTE CatM1 and NB1 are progressively deployed by cellular operators — like Verizon’s recently announced joint initiative with Qualcomm leveraging Cat M1 LTE modems — some suggest that this implies the extinction of other low-power wide area network (LPWAN) technologies like LoRaWAN, a technology intended for connecting things wirelessly over ranges of up to 15km with battery life of up to 10 years in regional, national or global networks.

More and more major telecom operators, however, are deploying or planning to use the technologies in parallel, including Orange, Swisscom, Proximus, APT, KPN, Bouygues and others. As we move more into the age of the internet of things in industrial and consumer settings, this collaboration of both 3GPP technologies and LPWAN may be the right approach.

Obviously, none of these operators have taken the decision to deploy LoRaWAN lightly. They see 3GPP technologies — like the LTE standards and others — and LoRaWAN technologies as complementary. As both the public and private sectors adopt these technologies, here are a few things to consider about the benefits and use cases of both, and about the inherent flaw in the system that needs to be remedied.

Benefits of LPWAN

Broadly speaking, LPWAN is defined as a market segment where lowest cost and lowest power consumption are the key selection criteria for the communication technology. For realistic LPWAN use cases, with a few tens of messages per day, the power consumption performance of LoRaWAN is five times better than CatNB1 (currently the 3GPP state of the art, requiring R13 networks), and the peak-current performance is an order of magnitude lower. Thanks to the characteristics of the ultra-low-leakage-current batteries used in most devices with a 10-year-plus battery life, this translates into an order of magnitude difference in battery size, and therefore total cost of ownership, in favor of LoRaWAN. In other words, if you want to power your things with little cost and little energy output, LoRaWAN is often the way to go.

Another important factor is that IoT is only marginally a market for national telco operators. Most IoT applications, on the other hand, are expected to be deployed in a “campus” scenario: airports, smart factories, smart cities, smart buildings, smart agriculture, etc. The use cases in this segment are expected to be dense deployments (thousands of devices in a relatively small area), which will be served from on-site dedicated LPWAN base stations. Such use cases are much easier to deploy and manage with unlicensed spectrum technologies, like LPWAN.

Benefits of 3GPP technologies

Of course, this does not mean that all IoT applications will choose LPWAN, or specifically LoRaWAN. In many cases, especially everyday consumer cases, LPWAN isn't the best technology for the job. Whenever a device needs to take a picture or transmit hundreds of kilobytes of data, for instance, 3GPP technologies or Wi-Fi will be used — for things like wearables, including a runner's Fitbit. 3GPP technologies can also help network providers like Verizon extend their business models to include not only better connectivity, but compute capabilities that can bring them into new areas like retail point-of-sale and asset tracking.

Collaboration of both

In many cases, LPWAN and LTE Cat 1 will collaborate. For example, an LPWAN campus base station serving thousands of smoke detectors may use LTE Cat 1 as a backhaul technology and LoRaWAN to connect to individual devices. A sensor package may send back an abnormal data reading over LoRaWAN and then be "woken" to capture a snippet of local video, which will be streamed back over LTE Cat 1.
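
A minimal sketch of that campus pattern might look like the following, in Python. The read_uplink() function stands in for whatever the site's LoRaWAN network server delivers, the backhaul URL is hypothetical, and the escalation uses an ordinary HTTPS POST over the LTE Cat 1 link.

```python
# Minimal sketch of the campus pattern above: LoRaWAN uplinks handled locally,
# with the LTE Cat 1 backhaul used only for escalation. read_uplink() is a
# placeholder; BACKHAUL_URL is hypothetical; the POST uses the real 'requests' library.
import requests

BACKHAUL_URL = "https://ops.example.com/alerts"   # reached over the LTE Cat 1 link
SMOKE_THRESHOLD = 0.08

def read_uplink() -> dict:
    """Placeholder for one decoded LoRaWAN uplink from the campus network server."""
    return {"dev": "smoke-17", "ppm": 0.02}

def handle_uplink(uplink: dict) -> None:
    if uplink["ppm"] < SMOKE_THRESHOLD:
        return                                    # normal reading: stays on the LoRaWAN side
    # Abnormal reading: escalate over the wide-band backhaul so operations can
    # wake a camera, stream video, dispatch someone, etc.
    requests.post(BACKHAUL_URL, json=uplink, timeout=10)

handle_uplink(read_uplink())
```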

OSS/BSS redesign needed

With all this said, connectivity capability is not the real, key issue; the OSS/BSS systems and ecosystem management are. How can operators or enterprise customers manage hundreds of millions of devices, tens of thousands of applications contributed by hundreds of companies, and petabytes of data efficiently enough to create a viable ecosystem? How can millions of things, speaking only occasionally and very quietly, coexist in a network that is also host to talkative humans bent on maxing out their bandwidth with Netflix on the move, liveblogging with Periscope and Pokémon Go? That's a question we'll return to in a future article.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 26, 2016  10:28 AM

Communications service providers' approach to IoT is evolving

Matt Hatton Profile: Matt Hatton
Communications services, Internet of Things, iot, service provider, Service providers

Communications service providers are adopting increasingly diverse strategies to address the IoT opportunity. Many are aggressively pursuing vertical applications, while others are actively positioning themselves as horizontal players focused on the provision of connectivity. A number of major players are also putting in place capabilities to deliver a more global offering, while at the bottom end of the market smaller CSPs are accepting that their role will be little more than that of a connectivity provider. These changing dynamics, driven by an increasingly mature approach to IoT, mean that we expect the market for CSPs in IoT to evolve dramatically over the next two to three years.

The opportunity associated with IoT represents one of the bright spots for communications service providers, who are seeing increasing competition and eroding margins; in many cases it has been identified as a key growth driver. One of our key roles over the last few years has been helping communications service providers navigate the opportunity in IoT. One key part of that support is our yearly IoT CSP report, which compares the strategies of major CSPs in IoT, analyzing capabilities and describing best practices. Through that research, as well as a vast amount of other work with CSPs around the world, we inevitably identify a number of interesting trends. In this article we highlight a few of the ways in which CSPs' approach to IoT has been changing.

One of the factors that defines CSP approaches to IoT is the extent to which they focus on developing and deploying applications for vertical sectors, or instead pursue just horizontal capabilities, i.e., the provision of connectivity. The standard-bearer for the vertical approach is probably Verizon, which has placed a multibillion-dollar bet on the fleet management space, acquiring first Hughes Telematics for $612 million in 2012, and then Fleetmatics for $2.4 billion in August 2016. The reasoning is clear: the lion's share of the value in IoT is in the provision of the end application, so Verizon is aiming to be the provider of end services in that specific sector. Other CSPs have pursued similar approaches, for instance Vodafone with its Cobra Automotive acquisition, Orange in healthcare, Telefonica in smart cities and retail, and so forth.

Historically we would have seen the evolution of CSP approaches to IoT as being quite linear, i.e., moving from a wholesale approach based on selling SIMs, through setting up dedicated units and implementing IoT management platforms, to delving deeper into verticals to realize more of the value in IoT. However, in the last 12 months we have seen some variation from that. Firstly, we have seen some CSPs taking a more horizontal approach to the market. This is not necessarily a less sophisticated offering; it is more a case of focusing on more sophisticated horizontal capabilities, e.g., around supporting multiple access technologies or security. Tele2 is probably the best example of this type of CSP. Secondly, there is an emerging trend of smaller operators, mostly operating in a single country, accepting a secondary position in the support for IoT. Part of the stimulus for this change of attitude comes from the bigger CSPs and from value-added resellers who are offering to effectively act as an outsource partner for those smaller operators' IoT offerings. One example of this kind of activity is Vodafone's licensing of its GDSP platform to other operators, and its support for Partner market operators. Much of the other alliance activity, e.g., that of the IoT World Alliance and the Global M2M Association, provides a community of global players that smaller operators can tap into for help with global offerings. Even value-added resellers are getting in on the act. The best example is Aeris Communications, which has become increasingly focused on its line of business supporting CSPs that may not themselves have the IoT-related expertise necessary to address the market opportunity.

Communications service providers have a critical role to play in IoT, even if it is only in the provision of connectivity; and this is even more true with the deployment of extensive new IoT-friendly networks in the form of low-power wide area networks. However, there are many additional potential roles that they can play. These range from those that will basically outsource all elements of IoT bar the connectivity, all the way through to those that embrace the substantial opportunity associated with monetizing the data generated by IoT, a topic which we have not even touched on here. The key recommendation that Machina Research always makes is "use what you have," i.e., there is no single correct strategy for every CSP. Each starts from a different position based on network assets, historical anomalies (for instance, a history of dealing with healthcare-related issues), geographical coverage, business structure (e.g., whether they have an IT services arm), sensitivity to risk and numerous other factors. And each must shape its strategy according to those characteristics.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 26, 2016  10:08 AM

IoT and the case for intelligence at the edge

Tom Bradicich Profile: Tom Bradicich
Edge computing, IIoT, Internet of Things, iot, Sensor data

There are six billion things connected to networks and sharing data right now. An extraordinary number for sure, and yet we’re still at the beginning of what promises to be a long-term, geometric expansion in volumes of connected devices. Gartner estimated that within four years more than half of major new business processes and systems will include “some element” of the internet of things, which means more connected things.

There’s good reason to believe the hype. Sensor data pent up in IoT can be unleashed to make homes, cars, businesses, utilities and even cities smarter. Our challenge is to develop systems that capture and process data fast, efficiently and at the point where it can have maximum impact.

The case for adding intelligence at the edge

Many of today's sensor networks are positioned near clouds or have fast access to them. No one should be surprised by that; much of IoT, especially consumer IoT, has formed in areas where it's convenient to deploy. But as IoT grows, we should expect companies to bring sensors and data acquisition systems to things in remote areas, resulting in use cases that the cloud would be ill-suited to support.

There seems to be a frenzy today around "sensor to cloud, sensor to cloud." And there's good reason for this, as the cloud has proven to be a fine progression from traditional data center IT. So, why not extend the benefits of the cloud to the processing of sensor data from IoT? Well, we should, we have and we will. But there are just seven problems:

  1. Latency is too important. A smart car that uses sensors to detect items on the road ahead needs to process information instantly. The cloud can’t provide that, and shouldn’t be expected to.
  2. Bandwidth is too sparse. Rural areas are generally connected via expensive satellite systems that don't have much bandwidth to begin with. Sending information to the cloud this way can take far too long to yield useful results, or be impossible altogether.
  3. Compliance requirements would complicate data sharing. Governments and corporations have put restrictions on how far data can travel. Thus, collecting data in London and shipping it to Paris for analysis isn’t an option anymore.
  4. Transmitting data would create security problems. Attackers target data that is in motion because it’s usually tougher to secure information that’s in transit. What’s more, data that’s being sent for analysis is also usually worth encrypting, which can make files fatter and transmission materially slower.
  5. Pushing data to the cloud costs money. Bandwidth isn't free, and sometimes it isn't cheap.
  6. Cloud access duplicates data collection efforts. There'll be some duplication of software and hardware (not total, of course) if both the edge and the cloud are equipped to handle massive IoT data.
  7. Distance creates data corruption that pollutes analysis. Have you ever participated in a cross-border phone call? Between the static and the dropped words you get a sense of what you’re hearing on the other end, but the connection is still anything but clear. Too much data is being lost during transit. Cloud connections can suffer from this same problem.

In summary, we can't assume that pervasive connectivity will always allow the proper sharing, combining and processing of data. Pervasive connectivity may not be available or suitable on an oil platform floating in the Gulf of Mexico, on a secure manufacturing floor or in the wide-open fields of an Iowa farm. And it likely won't be performant enough for the astoundingly massive data to come from future autonomous vehicles.

Thus, instead of having to transmit data to faraway servers, in many cases it’s more practical to install portable, rugged, data center-grade computing systems onsite. Processing data at the time and place it’s collected means information becomes insight faster, leading to intelligent action that can save resources or even lives.
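
A back-of-the-envelope sketch in Python shows why (all link speeds, sample sizes and rates are assumptions for illustration):

```python
# Back-of-the-envelope sketch of why raw "sensor to cloud" breaks down on thin
# links (problems 2 and 5 above). Link speed, sample size and rate are assumptions.
SATELLITE_UPLINK_BPS = 256_000           # assumed rural satellite uplink, 256 kbit/s
SAMPLE_BYTES = 200                       # one JSON-encoded sensor reading
SENSORS = 500
SAMPLES_PER_SEC = 10

raw_bps = SENSORS * SAMPLES_PER_SEC * SAMPLE_BYTES * 8
print(f"raw stream: {raw_bps/1e6:.1f} Mbit/s vs {SATELLITE_UPLINK_BPS/1e6:.2f} Mbit/s available")
# raw stream: 8.0 Mbit/s vs 0.26 Mbit/s available -> roughly 31x over capacity

# Edge alternative: process locally, send one summary per sensor per minute.
summary_bps = SENSORS * (SAMPLE_BYTES * 8) / 60
print(f"edge summaries: {summary_bps/1e3:.1f} kbit/s")   # ~13.3 kbit/s, easily fits
```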

Future of IoT: From edge to cloud

IDC predicted that as much as 45% of IoT-created data will be stored, processed, analyzed and acted upon near or at the network edge by 2019. I think it'll be more; I believe the data from IoT will be faster and bigger than all other types of big data combined.

Enterprises seeking to seize opportunities created by IoT will have to employ intelligence at many stages of the end-to-end IoT solution — from the thing, to the edge, to the data center and cloud, and lots of places in between.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 25, 2016  11:54 AM

How to successfully protect your organization from IoT cyberattacks

Mike Milner Profile: Mike Milner
Application security, Cyberattacks, Internet of Things, iot, iot security, RASP

With an estimated 6.4 billion internet-connected devices currently in use, 2016 has certainly been the year of the internet of things. This movement has brought increased functionality, data and insights to everything we do, not only providing us more information about our day-to-day lives, but also improving and streamlining processes in industrial and commercial spaces.

But with this increase in IoT-connected devices comes an increased risk of cyberattacks. As many of these devices have never been connected to the internet before, it’s easy to forget that they are vulnerable to hacking. It’s also easy to forget that with this volume and diversity of devices connecting to IoT comes a myriad of applications needed to support them. As a result, development teams are under more pressure than ever to deliver applications as quickly as possible.

To meet this demand, development teams have begun shifting from traditional "core IT" practices — which had time built in for testing and patching vulnerabilities in code — to practices that allow developers to constantly release and rerelease applications as they are developed and updated, leaving no time to test or monitor for potential security flaws. To combat this, organizations must make security an organization-wide commitment, implementing tools and training employees on the best ways to protect themselves and their devices.

Implement a runtime application self-protection (RASP) program
For years, application protection has been handled by developers who, in addition to writing code, were responsible for testing, monitoring for and patching any vulnerabilities that they found. But as developers take more on their plates and hackers become more advanced, a deeper, more involved level of protection — one that also eases the workloads of development teams — is necessary. Application-level security tools provide just that. Embedded within an application and running constantly, these tools monitor for, recognize and block attacks in real time, ensuring an application's safety with little to no intervention from development teams.
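
To show the shape of the idea, and only the shape, here is a toy sketch in Python of a WSGI middleware that watches requests from inside the application process and blocks obviously suspicious payloads. Commercial RASP tools instrument the runtime far more deeply; the patterns and responses here are illustrative only.

```python
# Highly simplified illustration of in-application protection: a WSGI middleware
# that inspects each request from inside the app process and blocks obviously
# malicious payloads. Patterns and responses are illustrative only.
import re
from io import BytesIO

SUSPICIOUS = re.compile(rb"(union\s+select|<script|\.\./\.\./)", re.IGNORECASE)

class NaiveRaspMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(length) if length else b""
        query = environ.get("QUERY_STRING", "").encode()
        if SUSPICIOUS.search(body) or SUSPICIOUS.search(query):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"request blocked"]
        # Replace the consumed body so the wrapped application can still read it.
        environ["wsgi.input"] = BytesIO(body)
        return self.app(environ, start_response)
```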

Be mindful of when new applications are connecting to a network and what their capabilities are
With new products being added to IoT every day, it can be easy for an organization to lose track of all of the devices connected to servers and networks. However, it is important to remember that every connection — from building management systems and office equipment that tracks activity within your space to employees’ personal and professional devices — has the potential to be an entryway for hackers. And in the case of IoT-connected devices, not only does this put potentially sensitive information at risk, but it can also have real-world effects if hackers are able to gain access to the right device. To combat this, organizations need to actively be aware of and monitor all connected devices and train employees on the dangers of hacks. Making security an organization-wide initiative will benefit both the business and its employees.

Understand when it is and isn’t necessary for a product to be connected to IoT
As with any exciting technological movement, it’s easy to get caught up in the latest and greatest. While there are many IoT-connected devices that add value in a workplace, such as intelligent systems that can monitor and adjust temperature and light levels depending on occupancy, or keycard-operated locks that help to keep non-employees from entering a workspace, there are many products that simply do not. When determining whether or not to purchase an IoT-connected device, organizations must weigh the risk to the business if the device is compromised against the benefit of having internet access. Being selective in this process will not only help to reduce risk, but also protect the organization from a potentially unnecessary and costly investment.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


