IoT Agenda

March 14, 2018  11:18 AM

Maintaining the focus on IoT cybersecurity in 2018

Richard Blech Profile: Richard Blech
Authentication, Botnet, cybersecurity, Hackers, Hacking, Internet of Things, iot, IoT devices, iot security, patching, Ransomware

Despite the attention placed on improving IoT device security, it remains a weak link. Most of these devices lack basic security capabilities, and when they do have them, a configuration problem often renders them vulnerable. The sheer number of IoT devices gives hackers plenty of opportunities to commit security breaches.

There are now more connected devices than there are people on the planet, and this number is expected to exceed 20 billion by 2020, according to Gartner. The wide range of IoT applications runs from sensors placed in cornfields to connected cars and even connected sports equipment. This breadth of usage gives hackers many opportunities to poke around, find security holes and then control the devices and/or steal important data.

To prevent IoT-related threats, firms engaging with such devices should squarely focus on security for the remainder of 2018.

Improving ransomware protections

The WannaCry attack that began in May 2017 is an example of how quickly ransomware can spread. It took advantage of users who had not applied a Microsoft patch, spreading to more than 230,000 computers and causing hundreds of millions (if not billions) of dollars in damages.

The ransomware threat to IoT devices is growing: as the security of PCs and networks improves, hackers go looking for alternative “easy marks.” A group of white hat hackers showed two years ago how they took over a smart thermostat from hundreds of miles away, demonstrating how hackers could hold such devices ransom. The implications of such attacks are staggering, with the potential for hackers to commandeer IoT-connected machinery, connected cars and other systems. For example, Johnson & Johnson warned in 2016 that one of its insulin pumps was susceptible to hackers, who could conceivably control the pumps and deliver an unauthorized injection.

The aim for many of these hacking attempts will be to control (and ransom) the actual devices, and many firms will be tempted to pay because their businesses rely so heavily on the uninterrupted performance of those very devices. Consider an industrial setting where locking all IoT devices could mean interruption to the power grid, or cessation of all work on a production line. And even if the hacker’s end game is not to control the thermostat, these IoT devices are still all connected to the home Wi-Fi and act as an easy entry point to the network.

Making the case for encryption

Security professionals should also implement encryption for IoT data in transit between devices, at rest and as it moves to back-end systems. Using cryptographic algorithms for IoT data helps firms ensure data integrity and makes them less attractive targets for hackers. The industry challenge for 2018 is how companies will develop encryption protocols and processes that work across the massive range of IoT devices. For sectors such as healthcare that are being transformed by IoT, encryption is mandatory: such devices transmit very personal, identifiable data about patients, and in some cases that data is exposed in transit.
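One piece of the data-integrity requirement above can be sketched in a few lines. The following minimal Python example (stdlib only) attaches an HMAC-SHA256 tag to a sensor reading so a back-end can detect tampering in transit; the shared key and field names are hypothetical, and a real deployment would pair this with actual encryption (e.g., AES-GCM or TLS) rather than integrity checks alone.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-device-key"  # hypothetical pre-shared device key


def seal(reading, key=SHARED_KEY):
    """Serialize a sensor reading and attach an HMAC-SHA256 tag."""
    payload = json.dumps(reading, sort_keys=True)
    tag = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}


def verify(message, key=SHARED_KEY):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```

Any modification to the payload in transit, even a single digit of a glucose reading, causes verification to fail at the back end.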

Addressing updates

IoT devices are susceptible to botnet attacks, such as Mirai and JenX, which target routers, digital cameras and other devices that are connected to the internet. These botnets pool the bandwidth of compromised devices, which can then be used for distributed denial-of-service and other forms of attacks.

There are several recommended security improvements for IoT, including the need for a system of regular software updates. Unlike PCs or networks, many IoT devices never receive updates, so they're left in the same security state as when they left the manufacturer. Hackers, meanwhile, are constantly probing devices with new methods and programs; without updates, the devices aren't equipped to deflect the latest hacking attempts. Device manufacturers should ensure their IoT components support regular updating (and users must actually perform the updates) in order to deter exploit attempts.
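A safe update process has two checks at its core: is the offered firmware actually newer, and is the downloaded image intact? The sketch below illustrates both under assumed conventions (a dotted version string and a manifest carrying a SHA-256 digest); it is a simplified illustration, not a real update client, which would also verify a cryptographic signature on the manifest itself.

```python
import hashlib


def should_apply(current_version, manifest, image):
    """Apply an update only if the manifest advertises a strictly newer
    version AND the downloaded image matches the published digest."""
    newer = tuple(int(p) for p in manifest["version"].split(".")) > \
            tuple(int(p) for p in current_version.split("."))
    intact = hashlib.sha256(image).hexdigest() == manifest["sha256"]
    return newer and intact
```

A device running this check refuses both stale images (no version gain) and corrupted or tampered downloads (digest mismatch).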

Bolstering authentication

Additional IoT cybersecurity initiatives include two-factor authentication between machines, giving IoT implementations an extra layer of protection with a second factor, but without the need for manual human entry. There's also a need for improved management of multiple-user access to single devices through biometrics and advanced digital certificates. Data analytics and machine learning can play a role by analyzing information about IoT security issues and helping to develop better future protections based on past events. Such analytics can also spot threats in action by detecting anomalies and then automating preventative measures.
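Machine-to-machine second factors are often built on time-based one-time passwords, which a device can compute on its own without human entry. Below is a minimal stdlib-only Python sketch of the RFC 6238 TOTP algorithm (HMAC-SHA1 variant); the secret shown in the usage example is the RFC's published test key, not a production value.

```python
import hashlib
import hmac
import struct
import time


def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)   # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Two machines sharing the secret can each compute the code independently and compare; the code rotates every 30 seconds, so a captured value is useless moments later.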

As the number of IoT devices continues to rise, and these devices alter how people work and play, there’s a corresponding need for protection. The industry as a whole needs to shift to improving security on the front end by refining security protocols and developing standards. And there needs to be complete security through the device’s entire lifecycle. Companies must focus their attention on all cybersecurity threats for the rest of 2018, with an emphasis on developing plans for IoT security.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

March 13, 2018  2:49 PM

Smart video surveillance delivers actionable insights

Christopher Bergey Profile: Christopher Bergey
Big Data, Camera, cameras, Data Analytics, Data storage, Internet of Things, IoT data, Real time analytics, Security Cameras, Sensor data, Surveillance, Video, Video surveillance

Security cameras represent a large global market driven primarily by the increased adoption of video surveillance systems for business or public-sector intelligence, as well as rising threats associated with public safety. As such, surveillance cameras are now common in and around government buildings, military posts, businesses, banks, transportation centers, casinos, shopping malls, sports venues, historic landmarks, schools and many more.

Surveillance isn’t just about security any longer; in many cases, it’s about extracting value and intelligence from the video captured. This could include analyzing retail shopper behavior, managing a parking facility or monitoring production of manufactured goods. Nowadays it is no longer unusual to see a drone flying overhead, capturing images and video of a construction site or farmland.

As such, the video surveillance market is experiencing burgeoning growth: as of 2016, it was valued at over $30 billion, and it is expected to reach $75 billion in revenue by 2022, a compound annual growth rate of 15.4% from 2017 to 2022. What has changed in surveillance is not how data is captured, but how it can be used to drive actions, not only as part of a fast data application that analyzes data as it is captured, but also as part of a big data application that analyzes data when required. It’s no longer just about storing data; what we can do with it once captured is fueling a new generation of ‘smart’ video applications.

What is smart video?

Smart video is about this shift from imagery to insights: from simply collecting data for forensic, after-the-fact review to analyzing and understanding the context of the data captured. It uses artificial intelligence and algorithms derived from big data to provide immediate insights and forward-looking predictions. These fast data examples include:

  • Parking space management where analytics can be used to determine peak hours of operation, handicap parking use, areas of congestion, average parking durations and unmoved vehicles.
  • Machine production analytics, which can be used to determine yields produced, failures that occurred or are about to occur, machine issues and inefficiencies, upcoming maintenance and peak hours of operation.
  • Customer retail buying preferences where analytics can be used to determine how many people entered the store, their gender and ages, in-store time spent, average spend and traffic generated by the new kiosk.
  • Agricultural drone surveillance where analytics can be used to survey a farm and surrounding land, diagnose vegetation and crop health, determine possible yields, and track livestock and food consumption, as well as insect and pest populations.
  • Smart city scenarios where analytics can be used to provide safety and evacuation information, and can coordinate with weather and traffic data to create the fastest evacuation routes out of a city.

The need to provide intelligent capabilities within video surveillance, coupled with the development of cloud-based surveillance systems, has led to the evolution of a smart breed of cameras at the network’s edge. These edge-based cameras have a powerful computing element and capable storage implemented within, enabling local capture and analysis (where the data is generated and lives) and providing valuable insights in real time, unaffected by network availability or latency.

Sample use case

Authorities are looking for a missing senior citizen with cognitive impairments who may need help. They believe he entered a store and left. In a big data application, someone would have to review hours of captured video, looking backwards for evidence of the missing man in the store, and possibly perform additional analysis on the data to determine his actions, identify when he entered or left the store, and take some action. In this example, big data analysis is performed after the event has occurred.

Utilizing AI and algorithms from big data, fast data responds to events as they occur. Once the senior citizen enters the store, a fast data app can perform real-time facial recognition from the video feed, comparing the senior’s face to a database library of facial signatures. If the facial signature is detected, the application can trigger a security alert to help the senior in distress and get him back to his family safely.
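The matching step in that fast data flow can be illustrated simply. The sketch below is a hypothetical, stdlib-only Python version of the comparison: it scores a face embedding (the numeric "facial signature" a recognition model would produce) against a small signature library using cosine similarity, returning an identity only above a threshold. Real systems use learned embeddings and much larger libraries; the vectors and threshold here are purely illustrative.

```python
from math import sqrt


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))


def match_face(embedding, library, threshold=0.8):
    """Return the best-matching identity above the threshold, or None."""
    best_id, best_score = None, threshold
    for person_id, signature in library.items():
        score = cosine(embedding, signature)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

When a frame yields an embedding close enough to a library signature, the application can fire the security alert described above; anything below the threshold is ignored.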

The data storage strategy

As big data gets bigger and faster, and fast data gets faster and bigger, the storage strategy is not to funnel all video content to the main server, which is expensive and dependent on network availability. Instead, use a combination approach: store data locally at the camera level, aggregate video and data at edge gateways at various distances from the edge, and send the rest back to the cloud, where big data content typically resides. A video surveillance system that uses edge cameras and this storage strategy reaps high system and service reliability, low TCO and the ability to scale without adding expensive recorders or servers to the surveillance system.
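That camera/gateway/cloud split can be expressed as a simple placement policy. The Python sketch below is one hypothetical interpretation of the strategy; the tier names, the 24-hour window and the "flagged" criterion are illustrative assumptions, not a vendor's actual policy.

```python
from dataclasses import dataclass


@dataclass
class Clip:
    age_hours: float
    flagged: bool  # an analytics event (motion, face match, etc.) was detected


def storage_tier(clip):
    """Illustrative placement policy: recent footage stays on-camera,
    older event footage is aggregated at the edge gateway, and the
    long tail is archived in the cloud for big data analysis."""
    if clip.age_hours < 24:
        return "camera"   # local embedded storage at the point of capture
    if clip.flagged:
        return "gateway"  # keep events close to the edge for fast retrieval
    return "cloud"        # cheap, scalable archive for big data workloads
```

The point of the policy is that fast data stays where latency is lowest, while only the long tail travels the network to the cloud.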

Final thoughts

Fast data applications for smart video are endless and have only scratched the surface of real-world use. We amass and generate large amounts of information from the increasing number of data points captured by such edge devices as surveillance cameras. Applying analytics to real-time captured data is driving new smart video applications whose video streams extract value and intelligence that drive actionable insights.

Forward-looking statements: This article may contain forward-looking statements, including statements relating to expectations for Western Digital’s embedded products, the market for these products, product development efforts, and the capacities, capabilities and applications of its products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

March 13, 2018  1:08 PM

Retrospective: How IoT changed the world in 2017

Ludovico Fassati Profile: Ludovico Fassati
Connected Health, Internet of Things, iot, IOT Network, LPWAN, SIM, smart city

Our world is changing faster than ever before. Driverless cars, virtual reality surgery, smart AI assistants — these innovations are no longer science fiction. Technology is evolving to meet our needs and rising to the challenges of our changing world. In particular, the internet of things has expanded immensely, enabling incredible innovation across industries. The past year was a pivotal one for IoT — we saw more applications of IoT come to market than ever before, with Gartner predicting there would be 8.4 billion connected “things” in use by the end of 2017, a 31% increase from 2016.

Looking back on 2017, it’s clear revolutionary new applications of IoT changed our world and helped to improve existing technologies. IoT has had significant impact in industries like healthcare and transportation. Additionally, progress in low-power wide area network (LPWAN) technologies has created the potential for connectivity for an entirely new class of objects and systems. While IoT’s impact is far reaching, there were several key developments in 2017 that point to the future of this dynamic technology.

Connectivity helped doctors change lives

IoT is enabling the next generation of healthcare. Since connectivity allows for mass data aggregation, medical professionals have access to more information and insights than ever before. We’ve witnessed the impact firsthand working with Ekso Bionics, a pioneer in wearable exoskeleton technology. Ekso Bionics created the world’s first FDA-cleared connected exoskeleton for rehabilitating patients with stroke and spinal cord injuries. By providing connectivity to Ekso Bionics’ EksoGT exoskeleton suit, patient data can be shared with medical professionals in real time, providing a more complete picture of patient progress. These insights allow therapists to make adjustments accordingly, making rehabilitation safer and more efficient.

These methods proved so successful that in 2017, the company saw a 30% year-on-year increase in utilization of the exoskeleton. Today, there are more than 180 rehabilitation institutions around the world using the EksoGT to help their patients get back on their feet sooner. As this technology advances in the next few years, wearable exoskeletons have extraordinary potential to change the lives of individuals with stroke and spinal injuries.

Our everyday everything got smarter

While many IoT discussions center on consumer applications like smart lighting and thermostats, in 2017 the industry saw significant focus on LPWAN. Low-power wide area networks are an emerging, high-growth area of the IoT market, designed for low-cost applications with low data usage. They are a particularly compelling network option for hard-to-reach places, as they feature long battery life, low power draw and the ability to operate in remote areas. LPWAN offers great potential to connect everyday objects that have never before been connected, like parking spots and garbage cans.

Narrowband IoT is a leading LPWAN technology that has emerged as a driver of innovation in smart city development, an area that saw significant progress this past year. Cities around the world are connecting infrastructure like streetlights and parking lots. These systems provide better traffic control and improve safety. They can also help drive efficiency by keeping energy and maintenance costs low.

Plugging into the ride-sharing economy

We may be on the brink of the autonomous car era, but today, some of the most impactful change IoT is making on the transportation industry is in ride-sharing.

Last year, we partnered with Mobike, the world’s largest smart bike-sharing service, to bring its innovative service from China to Singapore as a first step outside its home market. Each Mobike is equipped with a smart lock containing an embedded IoT SIM, enabling users to locate an available bike nearby through a dedicated app. This is an important aspect of Mobike’s service, since there are no dedicated racks for the bikes. Instead, riders can securely park in any authorized location in a city, and the next rider finds the bike thanks to its connectivity. Additionally, while the bike is in use, GPS usage information is transferred via the IoT SIM, enabling the aggregation of transportation data. Mobike has been hugely successful, creating an alternative ride-sharing option that is convenient for users and environmentally beneficial for cities. Looking ahead, the data aggregated from the bikes could be used to better understand city transportation infrastructure.

IoT innovations changed the world in 2017 with progress in healthcare, transportation and LPWAN offerings. We’re excited to explore how these developments will set the course for IoT in 2018 and the years to come.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

March 12, 2018  2:27 PM

Introducing authentic AI: Making artificial intelligence effective

Jay Klein Profile: Jay Klein
ai, Artificial intelligence, Deep learning, Machine learning, Neural network

Artificial intelligence is being celebrated as the innovation that will change the world. And while it undoubtedly has a multitude of applications and uses, it’s worth remembering that one size does not fit all. When a company looks to deploy AI technology, there are many business-specific challenges, so making the right choices can be tricky.

For example, just recently, yet another AI-related breakthrough was announced: A robot dog learned to open a door to allow another robot dog to walk through it. While the research and development invested in this feat was undoubtedly huge, and the commercial potential of some applications is enormous, it is unclear how this specific innovation, or its core models and algorithms, can serve other industries and verticals. Herein lies the problem.

Gauging AI success in one field is, in many cases, meaningless for another. Worse, even digging deeper into the technology, evaluating, say, which machine learning algorithms a product uses or how many layers its deep neural network models contain, can be pointless, as these details do not directly reflect how successful the technology will be once deployed.

Nevertheless, the market seems to ignore this reality and continues to evaluate AI-based products with buzzword checklists of familiar AI terminology (e.g., supervised, unsupervised, deep learning and so on). While checklists are an effective tool for comparative analysis, they still require the “right” items to be included. Unfortunately, the items that matter most to the customer, from a problem-solution perspective, are typically absent.

Introducing authentic AI

Given all of this, there is a need to change the narrative around AI technology to something meaningful and authentic that reflects the real-life challenges and opportunities that businesses are facing. This is the time to introduce authentic AI.

The Merriam-Webster dictionary defines authentic as both “worthy of acceptance or belief as conforming to or based on fact” and “conforming to an original so as to reproduce essential features.” This is not about contrasting fake with real; it’s about acknowledging the essential features of AI and, hence, redefining the “checklist.” Often, these essential “authentic” features are hidden and only surface when a CIO or CDO is faced with a new problem to be solved, especially when the AI aspects of a proposed product are fully explored by asking questions such as:

  • Is the AI technology utilized by the product aimed specifically at my problem, and optimally so (e.g., in performance, cost and the like)?
  • Is it capable of addressing the complete problem or only a part of it?
  • Can it be assimilated into the existing ecosystem without imposing new demands?
  • Can it address the compelling environmental conditions of the problem space?

These issues can be grouped into three different “classes”: original, holistic and pragmatic.

Original — How innovative is the solution? This can be quantified by assessing the following:

  • The invention of new algorithms or even new models;
  • The use of complex orchestration techniques; or
  • The capability to handle complex data formats and structures.

While there is no need to reinvent the wheel repetitively for any problem, there are distinctive characteristics which require optimizing.

Holistic — How complete is the proposed AI technology? It takes into account the capability of handling the end-to-end aspects of the offering, the competence of harmonizing the operation of the various AI components of the technology and the ability to adapt to ever-changing conditions of the AI application.

Pragmatic — Can the technology solve real-world problems in their actual, natural space in a commercially viable way? This means, for example, that data sources can be processed in their most native format (unstructured or structured), and that the technology provides insights or results matching the pragmatic needs of the specific market. The ability to deploy quickly and to act rapidly is assessed as well.

All of these elements should be used to systematically assess and evaluate AI-based products and technologies to assess their authenticity and therefore effectiveness in specific use cases.

For example, many home loan mortgage evaluation and recommendation systems utilize a somewhat isolated machine learning-based applicant classification method, one of many processes included within the product. The AI in such a system cannot be considered authentic to a high degree: it scores low on the original and holistic classes because it isn’t especially innovative (in an AI sense) and because the AI component on its own does not cover the end-to-end aspects of the technology (affecting overall performance and precision). It could be considered pragmatic to some degree if it can handle the required data sources of financial institutions or the customer applications natively, and if the technology’s “outputs” are the explicit results required as a specific recommendation (e.g., loan conditions). However, the deployment timeline (time-to-market) and commercial aspects need to be evaluated as well. This is just one example of many, covering all kinds of variations.

So, in the case of the door-opening dog, although many people heard about it, its application is fairly limited — in fact, you could say that its bark is definitely worse than its bite.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

March 12, 2018  10:55 AM

Reimagining blockchain for the supply chain

Shanmugasundaram M Profile: Shanmugasundaram M
Blockchain, Customer service, manufacturers, Supply chain, Supply Chain Management

What is blockchain? And why has pairing blockchain with the supply chain created huge expectations in industry circles? First, blockchain is a glorified ledger which cannot be hacked to alter any transactions that have happened in the blockchain network. All the nodes in the blockchain know about all the transactions that have happened since the network started. Hence, to compromise a transaction, all the nodes, or a majority of them, would have to be compromised in a reasonable amount of time, which is practically impossible. Blockchain was proposed as a platform on which cryptocurrencies can be generated and distributed, and bitcoin was the first currency to use it. Slowly, people realized that blockchain technology can be used for other use cases where distributed trust, or a trustless network, is needed. For example, contracts can be made smart by using blockchain to register and maintain them. Any change in a contract can be tracked, and the sanctity of the contract remains secure. The same goes for the supply chain, where orders placed on a blockchain can be tracked from start through delivery.

Power of blockchain

Blockchain technology is all about tracking a transaction from start to finish, with all the nodes in the network registering and updating themselves with the status of transactions. The power of blockchain comes from the fact that there is no central authority maintaining the record of transactions; the blockchain is distributed, and therefore trust is also distributed and not negotiable. All transactions are shared among participating nodes, and any node that gets compromised can be easily isolated, since for any transaction to pass muster, the majority of nodes must agree on it. This feature is the key to blockchain’s strength and can be adapted to a variety of use cases. One such use case is the supply chain, where orders are placed, tracked, delivered, invoiced, returned and so forth.
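The tamper-evidence described above comes from each block committing to the hash of its predecessor. The toy Python sketch below shows that core mechanism only, an append-only hash chain with an integrity check; it deliberately omits the distributed consensus among nodes that gives a real blockchain its trust properties.

```python
import hashlib
import json


def block_hash(content):
    """Deterministic SHA-256 over a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()


def append(chain, transaction):
    """Add a block that commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    content = {"prev": prev, "tx": transaction}
    chain.append({"prev": prev, "tx": transaction, "hash": block_hash(content)})


def is_valid(chain):
    """Detect any altered transaction or broken link in the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash({"prev": block["prev"], "tx": block["tx"]}):
            return False
    return True
```

Changing any recorded transaction invalidates its block's hash and every link after it, which is why rewriting history requires rewriting (and re-agreeing on) the whole chain.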

Blockchain in the supply chain

A supply chain is a business in which orders, the fulfillment of orders and the delivery of items are all meshed together in a gigantic mesh or, to some, a gigantic mess. A customer who orders an item and the supplier who supplies it have little visibility into each other’s side for most of the transaction’s phases. The customer who ordered a certain item might have ordered it for somebody else, who happens to be his own customer. That end customer will have no visibility into his orders unless the first customer updates his system (provided he has one) with data from the system through which the items were ordered. Cumbersome is the kindest word here.

In short, any supply chain’s digital infrastructure is opaque to everybody except the few who are direct consumers or producers. The reason for this opaqueness is trust, or the lack of it. Except for the entities that place orders and the entities that produce the ordered items, the systems are designed to exclude everybody else for the sake of security. And it can go on like this, keeping all the other stakeholders guessing.

What if a producer would like to know how much his customer has of his merchandise in stock and predict when his customer will place orders?

What if a producer knows the rate of consumption of his product by a set of his customers, so that he can plan his production accordingly?

Just-in-time production anyone?

At present, these scenarios depend heavily on the customer willfully sharing details about his inventory or order history so that others can plan accordingly, which is unlikely, for the very reason of security.

What is the answer to these many conundrums? Blockchain?

Now, imagine that a customer places an order on a system that has blockchain as its underlying platform. In other words, the blockchain registers all transactions, including the orders the customer has placed. The producer with whom the order has been placed will see the order and start servicing it. If the customer placed the order on behalf of another customer, the blockchain enables that other customer to see the status of the order as well. The blockchain enables this kind of transparency because its underlying data is not hackable by any normal means; it would take enormous computing power and time to do any damage to the data.

How blockchain can be used in the supply chain

A supply chain with blockchain can be called a “supply chain platform with blockchain,” or SCBC. The blockchain has to be the underlying infrastructure for all the data flowing through the supply chain. This data needs to be in the form of chunks of individual data that can be compartmentalized easily, such as an order, an invoice and so forth. This compartmentalization can help preserve the continuity of the data as it is transformed downstream, if needed. The SCBC should have a public interface where interested entities can gain entry with proper credentials. These credentials can be issued in such a way that entities see only the data pertaining to their interests and cannot peek into other entities’ data.

How can access to the underlying data in a blockchain be given? One way, and the most popular, is to use APIs. If the APIs are standardized, any entity that would like to look into the data and act upon it will be able to do so without hassle. These APIs would be part of any blockchain-based supply chain platform and would allow applications to be developed for which imagination is the only limit. The use case explained above, an entity predicting its own manufacturing needs based on its customers’ consumption patterns across recent orders, will be a reality.
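What such a credential-scoped SCBC query API might return can be sketched briefly. Everything below is hypothetical, the ledger records, role names and visibility rules are invented for illustration, but it shows the idea: every participant queries the same shared ledger, and credentials determine which events each one sees.

```python
# Hypothetical shared ledger of order events; "visible_to" encodes the
# credential-based scoping described in the text.
LEDGER = [
    {"order_id": "PO-1001", "event": "placed",
     "visible_to": {"buyer", "producer"}},
    {"order_id": "PO-1001", "event": "produced",
     "visible_to": {"buyer", "producer", "end_customer"}},
    {"order_id": "PO-1001", "event": "shipped",
     "visible_to": {"buyer", "producer", "end_customer"}},
]


def order_events(order_id, role):
    """Return only the events this entity's credential entitles it to see."""
    return [e["event"] for e in LEDGER
            if e["order_id"] == order_id and role in e["visible_to"]]
```

A producer sees the full order history, an end customer sees fulfillment progress, and an uncredentialed entity sees nothing, all from one shared source of truth.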

In another scenario, an end user of a product can track the inputs of the product he bought. This might be of interest to consumers who are more interested in knowing that they are buying products that are manufactured in an ethical and environmentally friendly manner. This kind of consumer is on the rise, and blockchain will enable these consumers to be aware of their purchasing choices. While it is true that manufacturers themselves certify their products are manufactured in an environmentally friendly manner, there is no transparency in such a declaration and the end user has to take the manufacturer’s word with full trust, which is a difficult proposition.


Even though blockchain was first envisaged for creating, distributing and maintaining cryptocurrencies, new uses are imagined on a daily basis. Use of blockchain in the supply chain is one of them, and it will be adopted in due course; the benefits outweigh the pitfalls.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

March 9, 2018  1:19 PM

How IoT is changing insurance

Maria Paz Gillet Martin Profile: Maria Paz Gillet Martin
"car insurance", Big Data, Insurance, Internet of Things, iot, IoT analytics, IoT data, safety, smart home, telematics

The insurance industry is rapidly evolving away from its traditional model of assessing future risks and pricing based on historical records and demographics (age, gender and the like). For decades, underwriters relied on this data to predict everything from an individual’s expected lifespan to the probability of a driver being involved in an accident. But, as it has with so many industries today, technology is disrupting this long-standing practice. In the case of insurance, the internet of things phenomenon has led this revolution.

Typically miniature in size, IoT devices record massive amounts of information on properties, vehicles and even people to which these gadgets are connected. The data then passes to insurers, which enables underwriters to judge potential hazards grounded on a more individual and property-specific picture of risk. Armed with this information, insurers can, in many cases, suggest real-time preventative measures that could avoid a costly claim payout.

IoT has already entered the insurance world. Here are four segments in which this technology is shaping the insurance industry of the future.

Car insurance

For decades, insurers wrote car insurance rates primarily based on demographics and age, which meant teenagers and people in their 20s were charged higher premiums, even though many may be perfectly trustworthy drivers. Today, however, many insurers offer drivers the option of installing a palm-sized IoT tool known as a telematics device into a car’s onboard diagnostics port, or OBD-II, typically located under the steering wheel. (Every car built after 1996 has an OBD-II.)

The telematics device monitors driving habits by recording the auto’s speed, distance traveled, time of day and whether the car has accelerated too quickly or made a hard brake. By analyzing this data, the insurance carrier can then evaluate if a person’s driving patterns make him a safe driver or a road menace. If deemed a safe driver, the insurer rewards the policyholder with a premium discount. Simply having the device in a car can make a person a better driver, too. For example, an alarm sounds if the driver slams on the brakes or speeds. Over time, these devices force drivers into safer road behavior, which then lowers their premiums.
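The scoring idea behind this can be sketched in a few lines: start from a perfect score and deduct points for risky events recorded by the device. The thresholds and weights below are invented for illustration and are not any insurer's actual model.

```python
# Illustrative usage-based scoring of telematics trip data.
# Penalties and the 80 mph speeding threshold are made-up values.

def score_trip(events):
    """Start from 100 and deduct points for risky events.
    `events` is a list of dicts with 'speed_mph' and
    'hard_brake' (bool), one per sampled moment."""
    score = 100.0
    for e in events:
        if e["hard_brake"]:
            score -= 5          # hard-braking penalty
        if e["speed_mph"] > 80:
            score -= 3          # speeding penalty
    return max(score, 0.0)

trip = [
    {"speed_mph": 65, "hard_brake": False},
    {"speed_mph": 85, "hard_brake": False},  # speeding
    {"speed_mph": 40, "hard_brake": True},   # hard brake
]
print(score_trip(trip))  # 100 - 3 - 5 = 92.0
```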

The theory behind telematics is that drivers should pay premiums based on real-life usage rather than age, gender, education and even credit scores. Usage-based insurance bases rates on real-time data, like how far and fast an individual actually drives, instead of purely demographics.

In the near future, more insurers will underwrite auto policies using these tech gadgets. SMA Research predicts by 2020 three-quarters of auto insurers will offer telematics to their policyholders.

Discounts aren’t the only benefit, however. Unfortunately, car accidents happen, and pinpointing fault can depend on the conflicting accounts of drivers or eyewitnesses. Fortunately, a telematics device delivers accurate data on the conditions that may have led to the crash, such as speed and braking distance. If the insured wasn’t at fault, the data will support that finding.

Home insurance

The popularity of “smart homes” hasn’t soared simply because homeowners demand the latest in IoT meters that adjust a dwelling’s temperature and energy usage. Home sensors and connected IoT devices have implications for homeowner insurance as well.

Homes outfitted with a water monitoring mechanism, for example, can detect a small water leak before it grows larger and floods a home, causing thousands of dollars in damages. Alerting the homeowner via an email or text of the impending disaster prevents an expensive claim.

Similarly, security cameras and motion-triggered sensors can stop a break-in before a theft occurs. According to Safeguard the World, the chances of a home burglary rise by 300% when a home has no security system.

Fire detectors also signal the homeowner and local fire departments of a blaze that can be quickly put out before it destroys an entire house, thereby reducing a claims payout. Business Insider’s research arm, BI Intelligence, estimates a home equipped with a connected smoke detector that automatically alerts the fire department could potentially cut an insurance payout by an average of $35,000.

Life insurance

Life insurance underwriters have traditionally set life policy premiums based on an individual’s age, medical history and health habits, such as smoking, submitted by the applicant, who may sometimes fudge the truth. But with today’s wearable IoT devices, insurers can more precisely determine a person’s well-being by collecting vital medical data like blood glucose levels, temperature and heart rate.

Instead of grouping a policyholder in an age group that may not indicate that person’s true health, IoT data provides an accurate snapshot of an individual’s overall fitness. That information, in turn, reflects a person’s true health risks and how long she will likely live, which is then used to calculate premiums. Like safer drivers, healthier people will be rewarded with lower premiums.

In addition to premium calculations, data fed into an IoT health monitor allows insurers to suggest lifestyle changes that will improve a person’s health and therefore bring down rates for healthier individuals. Insurers are also able to monitor a person’s health as it changes over time, which could have an impact on life insurance rates.

Workers’ compensation

According to the 2017 Liberty Mutual Workplace Safety Index, U.S. employers lost nearly $60 billion due to workers missing six or more days because of workplace injuries or accidents. Although that statistic dates back to 2014 (the most recent report from the Bureau of Labor Statistics and the National Academy of Social Insurance), the amount highlights the enormous cost of workers’ compensation insurance to pay for employees’ medical treatment and wages while they recover.

Yet, IoT offers employers a high-tech way to prevent those costly injuries. IoT instruments designed specifically to monitor a person’s physical condition pick up whether a worker is tired or performing too many repetitive motions. (The Liberty Mutual study ranked overexertion as the leading cause of workplace injuries, directly costing companies $13.8 billion each year.) Since fatigue often leads to injury, the wearable device buzzes a worker when it’s time to take a break and rest.

These IoT health sensors also check a worker’s body temperature, which could indicate an illness. Ill workers are more prone to accidents, so ensuring they recover at home maintains workplace safety.

IoT has emerged as a cutting-edge disruptor within the insurance industry, one that is leading to an exact measurement of risk that will ultimately lead to more precise underwriting not based on history, but on real-life situations and each individual. Correct underwriting means policyholders pay fairer rates, which makes insurance more attractive to potential buyers. Lastly, insurance carriers can switch from paying out a costly claim after the fact to preventing one from happening in the first place.


March 9, 2018  10:21 AM

How predictive maintenance can transform industrial IoT

Erick Dean Profile: Erick Dean
Data Analytics, IIoT, Industrial IoT, Internet of Things, iot, IoT analytics, IoT data, Predictive Analytics, Predictive maintenance, Sensor data

McKinsey forecasts that the potential economic impact from IoT systems — including consumer surplus — could reach $11.1 trillion per year by 2025. Given the ongoing surge in IoT adoption, it’s no surprise that many industrial companies are rapidly seeking to introduce IoT data into everyday operations. While IoT has already proven itself to be invaluable for its ability to collect data from entirely new sources, it truly shines when that sensor data can be correlated and analyzed.

By adding a new dimension to existing IT information, sensor data is making it easy to predict and identify issues before they occur, and enabling organizations to make decisions in real time.

At most industrial organizations today, the lack of real-time visibility into critical industrial systems leads to a reactive approach to managing operations. Problems are often solved via intuition rather than a data-driven approach. In these environments, unplanned equipment failure and system downtime cost a manufacturer millions of dollars in lost revenue and business opportunities.

As an example, consider a wind farm using reactive versus proactive maintenance for its wind turbines. Under a reactive approach, the wind farm’s strategy is geared toward fixing problems instead of preventing them. By forgoing a proactive, data-driven strategy, the wind farm could lose millions of dollars to failures and unplanned maintenance. One estimate is that 1% to 3% of turbines require blade replacement annually, and according to an NREL analysis, each blade costs approximately $150,000. If 100 blades were replaced each year, the cost would be around $15 million. In reality, the loss is higher still due to installation costs and possibly hundreds of other asset failures to address.
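The arithmetic behind that $15 million figure is simple enough to reproduce; the per-blade cost comes from the NREL analysis cited above, while the 100-blades-per-year fleet size is the article's illustrative assumption.

```python
# Back-of-the-envelope reproduction of the blade-replacement figure.
BLADE_COST = 150_000        # approximate cost per blade, USD (NREL estimate)
BLADES_PER_YEAR = 100       # illustrative fleet-wide replacement count

annual_replacement_cost = BLADE_COST * BLADES_PER_YEAR
print(annual_replacement_cost)  # 15000000, i.e. ~$15M before installation costs
```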

Because most IoT systems use real-time machine data, analytics can be used to calculate equipment health and efficiency, so organizations can predict and prevent failures before they happen. Machine learning and customized statistical models enhance predictive maintenance efforts by diagnosing alarms and anomalies in real time, accelerating issue response time without affecting production.
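A minimal sketch of the kind of real-time anomaly flagging described above: compare each new sensor reading against a sliding window of recent history and flag readings that deviate sharply from the baseline. The window size, threshold and sample data are illustrative, and production systems would use far richer statistical or machine learning models.

```python
# Rolling z-score anomaly detection over a stream of sensor readings.
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(readings, window=5, k=3.0):
    """Return indices of readings more than k standard deviations
    from the mean of the preceding `window` readings."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == history.maxlen:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(value - mu) > k * sigma:
                anomalies.append(i)
        history.append(value)
    return anomalies

vibration = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 9.5, 1.0]
print(detect_anomalies(vibration))  # the 9.5 spike at index 6 stands out
```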

Getting a simple view of complex industrial data is priceless. It takes away the hefty price organizations pay for maintenance and unplanned downtime. By taking an analytics-driven approach to maintenance and integrating data across disparate industrial control systems, sensors and applications, organizations can eliminate data silos and shift from a reactive to proactive method for managing operations.


March 8, 2018  3:02 PM

Four keys to successful IoT product software: Using a co-development approach

Danny Aponte Profile: Danny Aponte
Communication, Integration, Internet of Things, iot, IoT devices, Partnerships, Software, Software development, Teams

Implementing IoT successfully and efficiently requires strategic coordination and integration of custom development efforts between embedded, app and cloud software components. A failure to do so has consequences and can delay or even cripple the entire system. At IPS, we approach custom IoT software development by considering the requirements of the full stack from the outset. This has afforded us valuable lessons in how to overcome the challenges of prioritizing development and creating faster, more complete IoT systems.

It is much like building a house. A contractor coordinates the plumber, the electrician and the carpenter, prioritizing the work that must be done first and then ramping up the work that can be done simultaneously. Creating an IoT system is very similar: you need to prioritize the development of cloud, embedded and app software at the right time for it all to work seamlessly together.

The following are four key steps to creating successful IoT software.

1. Managing custom co-development. Managing separate software teams is a big challenge under the best of circumstances. To be effective, it is important to minimize overlap of embedded and application efforts as much as possible, and to prioritize when to start development of the various parts of your IoT deployment. To avoid the dangers lurking in parallel development, prioritize embedded system development early in your process so you can define a clear interface where the app can be integrated. Essentially, you want the embedded system to tell the app software, “Here’s what I can do. And what I can’t do.” Start your app team out on a slow burn and then as the embedded work ramps down, accelerate the app effort.

Whatever development effort you start with (embedded, cloud or app), try to remember the 60% rule. Get your first development effort to 60% complete before spinning up your next team. Once you hit the 60% mark, slow down the first team to stretch out the second half of the development schedule, maximizing the amount of overlap with the other development teams. This gives your early software teams time to mature their product and define clear integration endpoints, while providing a significant amount of team overlap to continue to be agile and address issues that arise later on.

2. Coordinating embedded and app team communications to ensure success. Give serious consideration to your teams’ proximity, and create a regular schedule of cross-team check-ins. This is particularly critical during testing, validating and debugging phases. Consider hosting a couple of cross-team scrums throughout the week, and creating a designated channel in Slack or your favorite real-time communications tool where the two engineering teams can post questions and design decisions that may impact the other team. Whatever your method, regular and deliberate communication between the two integrating software teams is an absolute requirement for the success of your multi-platform IoT connected system.

3. Hammering out a super-well-defined spec. Make sure everything is documented clearly and communicated consistently. Keep efficiency and system security in mind at each juncture and make sure your spec is crystal clear regarding the integration between both embedded and app teams. Lacking documentation on a small single-platform application can be a nuisance, but lacking documentation on a cross-platform connected system is disastrous. Your connected deployment must have clear, well-defined integration endpoints that are formally documented. Having a clear spec to share with external development teams helps to avoid many obstacles, including who’s right and who’s wrong when two integrated applications aren’t behaving properly; misguided assumptions, such as how a system or API is supposed to work; and inefficiencies, such as forcing other developers to learn your API by trial and error.

4. Maximizing priority on the integration phase. The challenges of this phase are immense and often terribly underestimated, resulting in late delivery and delayed product release. After the majority of development is completed in the individual embedded, cloud and app software projects, it’s critical that there is dedicated time for integration. No amount of planning can prepare your application for the problems introduced when you plug it into an external software platform or framework.

This co-development approach addresses the challenges of working across the full software stack by prioritizing the various pieces of IoT product development, arriving at a complete, successful technology as quickly and efficiently as possible. By optimizing the integration process among the various teams and software components, you create a smooth pathway to your connected IoT system.


March 8, 2018  11:54 AM

Wearables at work: Why enterprise usage is outshining consumer usage

George Thangadurai Profile: George Thangadurai
commercial, Construction, Internet of Things, iot, IoT devices, IoT sensors, safety, Sensors, Smart sensors, Smartwatch, wearable, Wearable devices, Wearables

Consumer wearables and smart clothing for fitness and health have often been compared to gym memberships; once the novelty of the new gadget fades, so too does the commitment to using it. Though definitive data remains elusive, dozens of studies conclude that one-third of owners abandon their fitness wearables within six to 12 months, with up to 50% forgetting about them for extended periods, or altogether. This may partly explain why, at least in the U.S., the consumer wearables market has been advancing much more slowly than expected. ABI Research is just one of many insights firms forecasting that the smart clothing market will top 31 million units shipped annually by 2021, but it sees a decline after a peak in 2019 since consumer-grade products have few utilitarian purposes. Overall, though the worldwide wearables market will continue to expand gradually, IDC predicts it will take at least four years for the number of units shipped to double from the 113.2 million in 2017.

Replacing the canary in the coal mine

In contrast, wearable devices are finding more distinct and functional purposes in commercial industries. In a report from the research firm Tractica, which takes a business-focused and often conservative view of market opportunities, worldwide shipments of enterprise and industrial wearables are projected to grow from 2.3 million units in 2015 to a predicted 66.4 million by 2021, a tremendous jump in just six years. And it’s understandable, considering the need to improve worker safety alone.

Up until the 1980s, the coal mining industry depended on canaries (songbirds) to detect the presence of carbon monoxide and other toxic gases and provide an early warning to miners. It was a standard practice for more than 50 years in the United States and Britain before digital technology enabled the “electronic nose.” Since its invention in 1982, some of the most advanced versions, worn on the wrist, are capable of detecting not only toxic gases, but also fire, light intensity, temperature, humidity and falls. A quantum leap for miners who work in conditions where suffocation and gas explosions are the foremost hazards, these smartwatches are quickly becoming the new standard for the mining industry of the 21st century.

Advancing the world’s first smart ring

Gartner recently reported that sales of smartwatches alone will account for nearly $17.4 billion of the total revenue potential among all wearables through 2021, indicating the growing popularity of these devices. But wearables are nothing new: consider the 17th-century Chinese abacus ring, the world’s first smart ring, which enabled bean counters to make mathematical calculations on one finger without pen and paper or the need for batteries. Thanks to advancing technologies, the abacus has evolved into an IoT device that can do far more than count heartbeats per minute and daily steps. The newest incarnations include a panic button that sends an emergency alert to a predetermined response team. It’s that kind of wearable technology which has contributed to burgeoning demand in precarious industries such as construction, biomedicine, manufacturing and transportation, where dangers from falls, heavy objects, liquids, chemicals and airborne viruses pose a significant threat to human health.

Building a resilient workforce

In July 2017, enterprise wearables took center stage at the American Society of Safety Engineers Safety 2017 conference in Denver, Colo., where devices designed to reduce workplace hazards and increase productivity were put on display. Devices focused sharply on the construction industry which, according to the Occupational Safety and Health Administration, remains one of the most dangerous businesses in the world, accounting for nearly one in four U.S. work-related fatalities each year. Remarkable technologies were unveiled, including a wristband that receives audible or visual alarm signals from wireless beacons placed to track the movement of large machinery and define restricted or unsafe areas. With capabilities to warn workers while simultaneously alerting supervisors and first responders, experts from ConstructConnect, a leading provider of construction information in North America, predict wristbands and wearables embedded into apparel and personal protective equipment such as gloves, boots, hardhats and safety vests could prevent thousands of injuries and illnesses suffered on construction sites each year. But that’s not all. With the proliferation of intelligent and IoT-enabled watches, eyewear, bracelets, headgear and footwear, a new era of improved safety for workers across all industries is on the horizon.

Below are just a few of the many devices showing great potential to reduce costly accidents and save precious human life:

  • Biometric wearables: Sensors embedded into clothing and accessories that monitor internal and external factors and can acquire information in real time to determine risks and send early warning signals. Tracking heart rate, body temperature and other vital signs, these devices can also monitor workers’ movements, repetitive motions and posture, as well as send alerts in the event of slips, falls, over-exhaustion or overheating.
  • Global positioning systems (GPS) and location trackers: In addition to accurately locating employees’ positions and generating their trajectory, GPS and location trackers can set secure safe zones and send warnings when breached, thus keeping workers a safe distance from dangerous areas, as well as hazardous chemicals and substances. For the logistics industry, especially the transporting of dangerous products, these monitors can accurately track staff and goods to ensure timely arrivals and troubleshoot in the event of delays.
  • Ruggedized smartphones: For industries where electrical, gas or chemical hazards, remote working conditions or extreme heat and cold present significant challenges, ruggedized smartphones provide an extra level of protection, with touchscreens and keyboards designed to function in extraordinary circumstances and respond to input through gloved hands. Outfitted with sensors and push-to-talk features, they allow seamless communication, monitoring and risk detection, with military-grade durability.
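The safe-zone behavior described for GPS and location trackers above can be sketched as a circular geofence check: flag a breach when a worker's position falls within a restricted radius of a hazard. The coordinates, radius and the simple circular zone shape are assumptions for illustration.

```python
# Circular geofence check for a hypothetical worker-safety tracker.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def breaches_zone(worker, hazard_center, radius_m):
    """True when the worker is inside the restricted radius."""
    return haversine_m(*worker, *hazard_center) < radius_m

crane_zone = (40.7128, -74.0060)  # invented hazard location
print(breaches_zone((40.7129, -74.0060), crane_zone, 50))  # ~11 m away: breach
print(breaches_zone((40.7200, -74.0060), crane_zone, 50))  # ~800 m away: safe
```

A production system would also debounce alerts and account for GPS error, but the core check is just a distance comparison.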

Providing a sense of well-being

Enabling IoT adoption and application across a wide range of devices, sensors are central to the ongoing development and implementation of wearable technology. If forecasts from the research firm IDTechEx are accurate, the worldwide sensor market, estimated at $5 billion in 2018, will drive a $160 billion market in 2028. No longer viewed as superfluous gadgets for fitness buffs and health fanatics, enterprise wearables have the potential to reshape the workplace in every field, from science to agriculture.

Even as artificial intelligence and technology continue to advance, human nature hasn’t changed much since the invention of the first smart ring. At the root of some of mankind’s greatest achievements, the quest for simplicity, security, convenience and speed continues to drive us forward. Like the abacus and the electronic nose, today’s enterprise wearables offer workers the obvious benefits of improved productivity and reduced risk, but more importantly, they can empower workers with self-awareness. Armed with a sense of well-being and the motivation to change their behavior, workers will have the confidence to take charge of their health and surroundings, on the job and off.


March 7, 2018  3:16 PM

What is the real state of IoT gateways?

Ben Pietrabella Profile: Ben Pietrabella
cloud, Cloud platform, Data Analytics, Edge analytics, Edge computing, GATEWAY, Internet of Things, iot, IoT analytics, IoT data, IoT platform, iot security, platform

The rapid adoption of intelligent IoT gateways has gone hand-in-hand with the proliferation of connected devices. Traditional data centers would struggle to cope with the tide of sensor-generated data and provide the interoperability required. Located at the edge, IoT gateways connect IoT devices to the cloud and bring to life any “smart” environment by providing connectivity and security while bridging different protocol adaptors with all the different types of cloud platforms — Microsoft Azure, AWS IoT, IBM Bluemix, etc.

These invaluable connection points work by “ingesting” data from IoT sensors. They transfer data to the cloud while at the same time receiving data from the cloud which they direct back to the devices. Intelligent gateways handle the different protocols to enable a seamless user experience, managing high-volume data flows and robust security.

Traditional gateways provided basic functionality. Today’s smart IoT gateways differ in a number of important ways. By performing the processing and analytics of data at the edge, they reduce latency and enable a whole host of critical use cases. They “edit” the data flow to pass on only what’s most relevant, and prevent the whole infrastructure from being deluged by data.
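That "editing" of the data flow can be sketched as a simple edge filter: forward a reading to the cloud only when it falls outside a normal operating band or represents a meaningful change from the last value sent. The band and change threshold below are invented for the example; real gateways apply far more sophisticated rules.

```python
# Edge-side filtering of sensor readings before cloud upload.
# Band and delta thresholds are illustrative values.

def filter_for_cloud(readings, band=(10.0, 40.0), min_delta=0.5):
    """Keep readings outside the normal band, or that changed by at
    least `min_delta` from the last forwarded value."""
    forwarded = []
    last_sent = None
    for r in readings:
        out_of_band = not (band[0] <= r <= band[1])
        changed = last_sent is None or abs(r - last_sent) >= min_delta
        if out_of_band or changed:
            forwarded.append(r)
            last_sent = r
    return forwarded

temps = [21.0, 21.1, 21.2, 25.0, 25.1, 45.0, 21.0]
print(filter_for_cloud(temps))  # near-duplicate readings are dropped
```

Here seven raw readings shrink to four forwarded ones, which is exactly the latency and bandwidth saving the gateway provides at scale.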

There have been IoT gateways for a number of years and they are valuable additions to the ecosystem. Nonetheless, how to deploy them — and when not to — is not yet widely understood. There are a number of factors to take into consideration when assessing whether it is right for an IoT initiative.

Designed for scale, performance and diversity

Intelligent gateways have become the go-to technology for IoT environments with many co-located edge sensors and where real-time responses to data are required. It is worth remembering that IoT is still in its infancy; as use cases and technologies proliferate, gateways need to evolve. The best smart gateways are designed to support several use cases across various domains, such as smart homes, energy and industrial scenarios, and offer REST API-based SDKs for application development. Additionally, they support all major interfaces and can handle software upgrades, a range of semiconductor platforms and a wide variety of devices.

Faster analytics

Gateways perform crucial edge analytic functions. This is a major factor behind their rapid market adoption. They analyze data coming from a device before it is sent to the cloud, meaning analytics are performed faster and without the need for vast amounts of storage and processing power. Programmable edge analytics help implement innovative use cases in a short timeframe, and are particularly attractive in situations where autonomous decision-making is required for critical operations.

Better security

One of the main benefits of the IoT gateway is added security. Gateways both protect data in the cloud or travelling to the cloud, as well as ensure that external unauthorized parties do not attack or take over IoT devices. Gateways help with compliance, too, a huge benefit given the complex privacy and data governance issues surrounding types and location of data. They are configurable to comply with standards (such as those defined by oneM2M) and to support proprietary interfaces.

Improved time to market

Gateways have an important role to play in product realization. Gateways with pre-integrated protocol interfaces and use case scenarios, such as smart metering or smart parking, can reduce time to market by four to six months.

There are other ways that the new breed of gateways can speed things up. Modern gateway architecture enables organizations to add new protocol interfaces much more quickly. Portability across multiple operating systems and pre-integration with SoC platforms also speed product realization and, what is more, deliver a significant reduction in total cost of ownership when deployed across multiple product lines.

Great… but is there a downside?

The truth of the matter is that IoT gateways have a lot going for them and are here to stay. However, edge analytics are a double-edged sword. Yes, great efficiencies are gained in terms of speed and processing power, but what is sacrificed is a lot of data, effectively “left behind” as the “relevant” data heads to the cloud. That’s great for many situations, but not all.

Similarly, some IoT situations simply don’t need the advanced capabilities intelligent gateways offer. This is true when the device has enough built-in storage, when the network has sufficient bandwidth or when the task in question does not require a great deal of processing. When advanced processing capabilities are needed, however, intelligent gateways are hard to beat.

