IoT Agenda

September 14, 2016  11:46 AM

The future of IoT is unwritten

Dirk Paessler Profile: Dirk Paessler
Internet of Things, iot, Network management, Network monitoring, Network performance

The internet of things continues to gain steam, both in production and in the popular imagination. The term has started to gather the same amount of industry buzz that accompanied cloud, perhaps more given its mainstream appeal. And the buzz is rightfully earned: there is tremendous potential in IoT, and IT professionals are going to be a major part of what happens. From smart manufacturing and self-repairing machinery to smart homes and cities, the internet of things has enormous potential. The future of IoT is unwritten, and IT professionals will have a serious role in writing it.

It’s easy to see the many advantages and efficiencies that will be gained from IoT. Taking manufacturing and industry as an example, intelligent machinery that orders its own replacement parts and can even repair itself will greatly reduce production downtime. At the same time, wearable technology promises to deliver massive improvements to health and safety by reporting on temperature, noise and other risk factors, alerting employees to dangerous environments.

As with industry, healthcare is another area where massive IoT gains are expected in the near future. Any number of medical devices, including heart monitors, pacemakers and wearable fitness technology, will be connected, giving healthcare professionals the ability to remotely monitor patients’ health. There are a number of ways for these connected devices to monitor and alert professionals, and even call for emergency services, based on a patient’s heartbeat, temperature or another metric falling outside preset parameters.
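As a rough sketch of how such preset-parameter alerting might work (the metrics, limits and alert handling here are hypothetical, not taken from any specific device):

```python
# Illustrative sketch of threshold-based patient alerting.
# Metric names and safe ranges are invented for illustration.

# Preset safe ranges per metric: (low, high)
LIMITS = {
    "heart_rate_bpm": (50, 120),
    "temperature_c": (35.0, 38.5),
}

def check_reading(metric, value, alerts):
    """Append an alert if the reading falls outside its preset range."""
    low, high = LIMITS[metric]
    if not (low <= value <= high):
        alerts.append((metric, value))

alerts = []
check_reading("heart_rate_bpm", 45, alerts)   # below range: alert raised
check_reading("temperature_c", 36.8, alerts)  # in range: no alert
```

A real system would add hysteresis and alarm-fatigue suppression, but the core contract is the same: devices report, preset parameters decide, humans (or emergency services) are alerted.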

These benefits, which are only scratching the surface of what IoT promises, do not come without challenges. There are a number of concerns around network performance, manageability, security and stability that will all need to be addressed and solved by a variety of vendors and IT professionals.

In an IoT-enabled world, so-called “dumb” devices will need to be connected to the IT infrastructure as well, which will create unusual workloads that will need to be benchmarked and monitored. This will also greatly expand the attack surface for would-be intruders, creating new cybersecurity challenges. Integrating, understanding, monitoring and securing an influx of new devices will require a great deal of planning and careful implementation — something that does not always happen when there is this much excitement around a new technology.

While the challenge of implementing and integrating all the “things” into the IT infrastructure is already substantial, what comes next is no small task either. All these connected devices are doing something on the network, and that something is creating massive amounts of data while also utilizing network resources. The volume of machine-to-machine data created by connected devices will require us to rethink the way we process and analyze data. Additionally, we will need to be able to visualize, contextualize and report on that data in a way that is usable. All of this needs to happen over networks managed and monitored by IT, without impacting existing business applications.

These challenges are far from insurmountable, but they are going to affect the future of IoT and how successful IoT ends up being, especially in the near term. The entire world seems prepared for the potential of this new technology, but its future is going to be written by armies of IT professionals architecting, implementing, managing and monitoring. It’s their opportunity to make a serious impact on the world and to move technology one giant step forward.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

September 13, 2016  2:30 PM

A foggy forecast for the industrial internet of things

Stan Schneider Profile: Stan Schneider
cloud, Connected Health, fog computing, IIoT, Internet of Things, iot

Signs on I-280 up the San Francisco peninsula proclaim it the “World’s Most Beautiful Freeway.” It’s best when the fog rolls over the hills into the valley, as in this picture I took last weekend:

[Photo: fog rolling over the hills into the valley along I-280]

That fog is not just pretty … it’s also the natural refrigerator responsible for California’s famously perfect weather. Clouds in the right place work wonders.

What is fog?

This is a perfect analogy for the impending future of industrial internet of things (IIoT) computing. In weather, fog is clouds close to the ground. In IoT, fog is defined as cloud technology close to the things. Neither is a precise term. But, in both, clouds in the right place work wonders.

Major industry consortia, including the Industrial Internet Consortium and the OpenFog Consortium, are working hard to better define this future. All agree that many of the aspects that drive the spectacular success of the cloud must extend beyond data centers, and that the real world contains challenges not handled by cloud systems. They also bandy about names and brand positioning; see the “IIoT Glossary” sidebar for a quick weather map. By any name, the fog, or layered edge computing, is critical to the operation of the industrial infrastructure.

Perhaps the best way to understand fog is to examine real use cases.

Example: Connected medical devices

Consider first the coming future of intelligent medical systems. The driving issue is an alarming fact: the third leading cause of death in the U.S. is hospital error. Despite extensive protocols that check and recheck assumptions, device alarms, training on alarm fatigue and years of experience, the sad truth is that hundreds of thousands of people die every year because of miscommunications and errors. It is increasingly clear that compensating for human error in such a complex environment is not the solution. The best path is to use technology to take better care of patients.

The Integrated Clinical Environment standard is a leading effort to create an intelligent, distributed system to monitor and care for patients. The key idea is to connect medical devices to each other and to an intelligent “supervisory” computing function. The supervisor acts like a tireless member of the care team, checking patient status and intelligently alerting human caretakers, or even taking autonomous actions when there are problems.


The supervisor combines and analyzes oximeter, capnometer and respirator readings to reduce false alarms and stop drug infusion to prevent overdose. The DDS “databus” connects all the components with real-time reliable delivery.

This sounds simple, but consider the real-world challenges. The problem is not just the intelligence. Current medical devices do not communicate at all. They have no idea that they are connected to the same patient. There’s no obvious way to ensure data consistency, staff monitoring or reliable operation.

Worse, the above diagram covers only one patient. That’s not the reality of a hospital; they have hundreds or thousands of beds. Patients move between rooms every day. The environment includes a mix of wired and wireless networks. Finding and delivering information within the treatment-critical environment is a formidable challenge.


A realistic hospital environment includes thousands of patients and hundreds of thousands of devices. Reliable monitoring technology must find the right patient and guarantee delivery of that patient’s data to the right analysis or staff. In the connectivity map above, every red dot is a “fog routing node,” responsible for passing the right data up to the next layer.

This scenario exposes the key need for a layered fog system. Complex systems like this must build from hierarchical subsystems. Each subsystem shares internal data, with possibly complex dataflow, to execute its functions. For instance, a ventilator is a complex device that controls gas flows, monitors patient state and delivers assisted breathing. Internally, it includes many sensors, motors and processors that share this data. Externally, it presents a much simpler interface that conveys the patient’s physiological state. Each of the hundreds of types of devices in a hospital faces a similar challenge. The fog computing system must exchange the right information up the chain at each level.

Note that this use case is not a good candidate for cloud-based technology. These machines must exchange fast, real-time data flows, such as signal waveforms, to properly make decisions. Also, patient health is at stake. Thus, each critical component will need a very reliable connection and even redundant implementation for failover. Those failovers must occur in a matter of seconds. It’s not safe or practical to rely on remote connections.

Example: Autonomous cars

The “driverless car” is the most disruptive innovation in transportation since the “horseless carriage.” Autonomous cars and trucks will change daily life and the economy in ways that are hard to imagine. They will move people and things faster, safer, cheaper, farther and more easily than the primitive “bio-drive” cars of the last century. And the economic impact is stunning: 30% of all U.S. jobs will end or change. Trucking, delivery, traffic control, urban transport, child and elder care, roadside hotels, restaurants, insurance, auto body, law, real estate and leisure will never again be the same.


Autonomous car software exchanges many data types and sources. Video and Lidar sensors are very high volume; feedback control signals are fast. Infrastructure that reliably sends exactly the right information to exactly the right places, at the right time, makes system development much easier. The vehicle thus combines the performance of embedded systems with the intelligence of the cloud, aka, fog.

Intelligent vehicles are complex distributed systems. An autonomous car combines vision, radar, lidar, proximity sensors, GPS, mapping, navigation, planning and control. These components must work together as a reliable, safe, secure system that can analyze complex environments and react in real time to negotiate chaotic conditions. Autonomy is thus a supreme technical challenge. An autonomous car is more of a robot on wheels than it is a car. Automotive vendors suddenly face a very new challenge: they need fog.

How can fog work?

So, how can this all work? I’ve hinted at a few of the requirements above. Connectivity is perhaps the greatest challenge. Enterprise-class technologies cannot deliver the performance, reliability, redundancy and distributed scale that IIoT systems need.

The key insight is that systems are all about the data. The enabling technology is data-centricity.


Fog integrates all the components in an autonomous car design. Each of these components is a complex module on its own. As in the hospital patient monitoring case, this is only one car; fog routing nodes (blue) are required to integrate subsystems and connect the car into a larger cloud-based system. This system also requires fast performance, extreme reliability, integration of many types of dataflow, and controlled module interactions. Note that cloud-based applications are also critical components. Fog systems must seamlessly merge with cloud-based applications as well.

A data-centric system has no hard-coded interactions between applications. When applied to fog connectivity, this concept overcomes problems associated with point-to-point system integration, such as lack of scalability, interoperability and the ability to evolve the architecture. It enables plug-and-play simplicity, scalability and exceptionally high performance.

The leading standard for data-centric IoT connectivity is the Data Distribution Service™ (DDS™). DDS is not like other middleware. It directly addresses real-time systems. It features extensive fine-grained control of real-time quality of service (QoS) parameters, including reliability, bandwidth control, delivery deadlines, liveliness status, resource limits and security. It explicitly manages the communications “data model,” or the types and QoS used to communicate between endpoints. It is thus a “data-centric” technology.

DDS is all about the data: finding data, communicating data, ensuring fresh data, matching data needs and controlling data. Like a database, which provides data-centric storage, DDS understands the contents of the information it manages. This data-centric nature, analogous to a database, justifies the term “databus.”
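To make the decoupling concrete, here is a minimal conceptual sketch in plain Python (not the real DDS API; the topic name and payload are invented) of a databus in which applications interact only with named topics, never with each other:

```python
# Conceptual databus sketch: publishers and subscribers are matched
# by the data (topic), not wired to each other point-to-point.
from collections import defaultdict

class Databus:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Declare interest in a topic; no knowledge of publishers needed."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        """Deliver a sample to every matched subscriber of the topic."""
        for cb in self.subscribers[topic]:
            cb(sample)

bus = Databus()
received = []
bus.subscribe("PatientVitals", received.append)
bus.publish("PatientVitals", {"patient_id": 42, "spo2": 97})
```

In a real DDS system, discovery, QoS matching and reliable delivery are handled by the middleware itself; the point of the sketch is only that the publisher never names, or even knows about, the subscriber.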


Traditional communications architectures directly connect applications. This connection takes many forms, including messaging, remote object-oriented invocation and service-oriented architectures. Data-centric systems fundamentally differ because applications interact only with the data and properties of data. Data centricity decouples applications and greatly enables scalability, interoperability and integration. Because many applications may interact with the data independently, data centricity also makes redundancy natural.

Note that the databus replaces the application-application interaction with application-data-application interaction. This abstraction is the crux of data centricity and it’s absolutely critical. Data centricity decouples applications and greatly eases scaling, interoperability and system integration.

Continuing the analogy above, a database implements this same trick for data-centric storage. It saves old information that you can later search by relating properties of the stored data. A databus implements data-centric interaction. It manages future information by letting you filter by properties of the incoming data. Data centricity makes a database essential for large storage systems. Data centricity makes a databus a fundamental technology for large software-system integration.

The databus automatically discovers and connects publishing and subscribing applications. No configuration changes are required to add a new smart machine to the network. The databus matches and enforces QoS. The databus insulates applications from the execution, or even existence, of other applications. As long as its data specifications are met, an application can run successfully.

A databus also requires no servers. It uses a protocol to discover possible connections. All dataflow is directly peer-to-peer for the lowest possible latency. And, with no servers to clog or fail, the fundamental infrastructure is both scalable and reliable.

To scale as in our examples above, we must combine hierarchical subsystems; that’s important to fog. This requires a component that isolates subsystem interfaces, a “fog routing node.” Note that this is a conceptual term. It does not have to be, and often is not, implemented as a hardware device. It is usually implemented as a service or running application. That service can run anywhere needed: on the device itself, in a separate box or in the higher-level system. Its function is to “wrap a box around” a subsystem, thus hiding the complexity. The subsystem thus exports only the needed data, allows only controlled access, and even presents a single security domain (certificate). Also, because the databus so naturally supports redundancy, the service design allows highly reliable systems to simply run many parallel routing nodes.


Hierarchical systems require containment of subsystem internal data. The fog routing node maps data models between levels, controls information export, enables fast internal discovery, and maps security domains. The external interface is thus a much simpler view that hides the internal system.
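As a rough illustration of that containment, the sketch below (plain Python, with hypothetical topic and field names, loosely echoing the ventilator example) shows a routing function that exports only a simplified summary of a subsystem’s internal data:

```python
# Conceptual fog-routing-node sketch: collapse detailed internal
# subsystem data into the single external record the next layer needs.
# Field names are invented for illustration.

def route_up(internal_samples):
    """Export a simplified 'patient state' view; internal detail
    (e.g., valve temperature) never crosses the subsystem boundary."""
    latest = internal_samples[-1]  # most recent internal sample
    return {
        "patient_id": latest["patient_id"],
        "breathing_assisted": latest["motor_rpm"] > 0,
        "o2_flow_lpm": latest["o2_flow_lpm"],
    }

internal = [
    {"patient_id": 7, "motor_rpm": 0, "o2_flow_lpm": 0.0,
     "valve_temp_c": 31.2},
    {"patient_id": 7, "motor_rpm": 1200, "o2_flow_lpm": 2.5,
     "valve_temp_c": 33.8},
]
exported = route_up(internal)
```

A production routing node would also map security domains and QoS between levels, but the essential job is the same: hide internal complexity and pass only the needed data upward.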

The key benefits of a databus include:

  • Reliability: Easy redundancy and no servers to fail allow extremely reliable operation. The DDS databus supports systems that cannot tolerate being offline even for a short period, whether five minutes or five milliseconds.
  • Real-time: Databus peer-to-peer delivery easily supports latencies measured in milliseconds and even tens of microseconds.
  • Interface scale: Large software projects with more than 10 interacting modules must carefully define, coordinate and evolve interfaces. Data-centric technology moves this responsibility from manual processes to automatic, enforced infrastructure.
  • Data scale: When systems grow large, they must control dataflow. It’s simply not practical to send everything to every application. The databus allows filtering by content, rate and more. Thus, applications receive only what they truly need. This greatly reduces both network and processor load. This is critical for any system with more than 1,000 independently addressable data items.
  • Architecture: Data centricity is not easily “added” to a system. It is instead adopted as the core design. Thus, the transformation makes sense only for next-generation IIoT designs. Most system designs have lifecycles of many years.

Any system that needs most of these properties should seriously consider a data-centric design.
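As a small illustration of the data-scale point above, content filtering can be as simple as delivering only the samples an application has declared interest in. The device names and alarm threshold below are hypothetical:

```python
# Illustrative content filter, comparable in spirit to a DDS
# content-filtered topic: the subscriber states a predicate and the
# infrastructure delivers only matching samples, cutting network and
# processor load for everything else.

samples = [
    {"device": "pump-1", "pressure": 3.1},
    {"device": "pump-2", "pressure": 9.7},
    {"device": "pump-3", "pressure": 2.4},
]

ALARM_THRESHOLD = 5.0  # hypothetical alarm limit

# Deliver only samples whose pressure exceeds the alarm threshold.
delivered = [s for s in samples if s["pressure"] > ALARM_THRESHOLD]
```

In a real databus the filter is evaluated by the middleware, often at the publisher side, so unwanted samples never touch the network at all.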

The foggy future

Like the California fog blanket, a cloud in the right place works wonders. Databus technology enables elastic computing by bringing the data where it’s needed reliably. It supports real-time, reliable, scalable system building. Of course, communication is only one of the required functions of the evolving fog architecture. However, it is key and relatively mature, thus driving many designs.

The industrial IoT will change nearly every industry, including transportation, medical, power, oil and gas, agriculture and more. It will be the primary driving trend in technology for the next several decades and the technology story of our lifetimes. Fog computing will move powerful processing currently only available in the cloud out to the field. The forecast is foggy indeed.


September 12, 2016  2:48 PM

How IoT data boosts post-sale supply chain profits

Dan Gettens Profile: Dan Gettens
Analytics, Internet of Things, iot, Predictive Analytics, Supply chain, Supply Chain analysis, Supply Chain Management

The IoT tsunami is creating a huge opportunity to streamline and drive waste out of a costly and largely inefficient post-sale supply chain. However, most organizations are at a loss when it comes to using IoT data to solve these challenges.

A quick look at the challenges

How efficiently and intelligently you handle everything from service triage and parts shipments through inventory carrying and reverse logistics can make an enormous impact on your top and bottom lines. The post-sale supply chain fuels aftersales service, which accounts for up to 80% of core profits. Plus, in the face of an increasingly globalized economy and commoditization of many product types, post-sales service, if done right, can provide a much needed competitive differentiation.

The challenge is that the post-sale supply chain is highly complex, with many moving parts and stakeholder interdependencies. Consider the complexity of trying to service millions of products in the field, each aging on a different clock, in constant customer use, with a high variation of service agreements, and supported by a service vendor ecosystem in which each vendor runs on its own SLAs, processes and systems. Exacerbating this is the fact that most companies lack the data, visibility and insights needed to optimize the service supply chain. That’s one reason problems like No Fault Found/No Trouble Found are so persistent; they account for 68% of returned consumer electronics products, for example. It’s also why companies often find themselves in crisis mode when shipping replacement parts for failed products and scrambling to address inventory stock-outs: downtime can cost end customers millions of dollars.

The IoT fix

Every day your products tell you, via connected machine log files, how to improve your business. IoT log files provide detailed information about what’s happening with each product in the field, pinpointing current and potential issues with software, infrastructure capacity, configuration, hardware and more. When analyzed alongside other critical post-sale supply chain data — including voice of the customer, voice of the process, real-time and historical operational data — this voice of product data can have a significant impact on post-sale supply chain health and outcomes:

Service parts inventory to support parts dispatch
For the last 50 or so years, the mathematical models supply chain planners have used to calculate inventory requirements have been based on factors such as past demand, variations in demand, the amount of stock in the market and lead time from suppliers. This time-series approach, although standard throughout the industry, has proven less predictive and reliable than companies would like. To combat this, many of them overstock inventory so that when customers’ products break down, replacement parts can be readily available. But purchasing and storing all that extra safety stock is very costly.

Through joint research with OnProcess Technology, Massachusetts Institute of Technology recently developed a new model for spare parts forecasting and inventory planning that incorporates machine failure predictability into the equation. The study found that by using IoT data, you can significantly reduce both costly inventory stock and stock-outs — even with relatively low predictive power. The higher the failure predictability, the greater the reductions. This also enables businesses to improve their ability to meet service levels and, in the process, save millions of dollars every year.
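The published study describes the model in detail; as a loose, hypothetical sketch of the general idea (not the MIT/OnProcess model itself), per-unit failure probabilities derived from IoT log data can replace a pure time-series demand estimate:

```python
# Illustrative sketch: folding log-derived failure predictions into a
# spare-parts stocking decision. The probabilities and safety margin
# are invented for illustration.
import math

def expected_parts_demand(failure_probs):
    """Expected replacements next period: the sum of per-unit failure
    probabilities read from each machine's log-based health score."""
    return sum(failure_probs)

def stock_level(failure_probs, margin=1.5):
    """Expected demand plus a safety buffer scaled by the binomial
    standard deviation, rounded up. Better predictability (probabilities
    near 0 or 1) shrinks the variance term, cutting both stock and
    stock-outs."""
    mean = expected_parts_demand(failure_probs)
    std = math.sqrt(sum(p * (1 - p) for p in failure_probs))
    return math.ceil(mean + margin * std)

# Fleet of 5 machines with log-derived failure probabilities
probs = [0.9, 0.8, 0.1, 0.05, 0.05]
```

The contrast with the classic time-series approach is that demand here is driven by the observed health of the installed base, not just by past consumption.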

Transportation order management to support parts dispatch
When products fail, vendors rush to ship spare and service parts. By general contract or business practice, parts are likely to be sent via costly next flight out, same day or two-day transport. However, imagine if instead of waiting for failures to happen, you could monitor the product’s log files to predict why and when a part is likely to break down. With this knowledge in hand, you can inform the customer of the pending problem and proactively ship a replacement part via slower and less-expensive means. Not only will this reduce transportation and process management costs substantially, it improves product uptime and, as a result, customer satisfaction.

Service chain triage
Inbound calls are the most reactive and least customer-friendly way of dealing with product problems. IoT data can help reduce inbound calls while increasing the use of more proactive and cost-effective channels such as outbound calls and self-service portals. IoT opens a range of options, from self-service and no-touch or low-touch support to proactive outreach and premium services.

By sharing insights gained from a product’s log files directly with the customer via a portal, you’re providing the intelligence they need to resolve common problems themselves, and offering what is often a faster and preferred method of resolution.

For an even more proactive approach that also facilitates upsell opportunities, you can program log data to trigger alerts, telling your outbound calling representatives, for example, that a particular customer’s product has a part that needs attention. The rep can then contact the customer to suggest remedies such as shipping an advance replacement part, upgrading the product or offering a premium support service.

Remorse returns/no trouble found
When customers complain that products either aren’t working and need to be fixed or replaced, or aren’t performing as expected and should be returned, IoT-enhanced analytics can signal whether there’s an actual problem — before anything is replaced or returned. If the IoT data doesn’t turn up any issues, the likely cause is a gap in education: customers weren’t adequately informed at the point of sale, or simply misunderstood or forgot how to use the product. By having your representative explain functionality and clarify services, you can avoid many costly remorse returns and NTF instances.
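As a hypothetical sketch of this kind of pre-return triage (the error codes and decision rule are invented for illustration):

```python
# Illustrative NTF triage: inspect a device's log events before
# authorizing a return. Error codes here are hypothetical.

HARDWARE_ERRORS = {"E101", "E207", "E330"}  # invented fault codes

def triage_return(log_events):
    """Return 'repair' if logs show a hardware fault, else 'ntf'
    (a candidate for a customer-education call instead of a return)."""
    if any(event in HARDWARE_ERRORS for event in log_events):
        return "repair"
    return "ntf"

faulty_unit = triage_return(["BOOT_OK", "E207"])   # real hardware fault
healthy_unit = triage_return(["BOOT_OK", "IDLE"])  # nothing wrong in logs
```

The payoff is that the "nothing wrong" cases are intercepted with a conversation rather than a shipment.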

Reverse logistics
It’s standard procedure in reverse logistics to send returned products to a central receiving location, where they’re evaluated for repair, inventory or scrapping. Diagnosing each product’s problem can be time-consuming and delay the inevitable next steps. IoT data can accelerate this process.

By applying IoT-enhanced analytics to the installed equipment’s log file, a case can be flagged as repairable or not before the product is returned. This enables you to eliminate the central diagnostic step, skip the receiving stop and route the product to the appropriate location right away. As a result, you can reduce reverse logistics costs, deliver parts to inventory faster and, when needed, apply local control on material scrapping to reduce unnecessary repair and transportation costs.

The more visibility you have into what’s happening with your connected products in the field — and the more you integrate those log files into your post-sale analytics processes — the better you’ll be able to turn what could be negative, costly events into positive experiences for your customers, and money-saving, profit-generating outcomes for your business.


September 9, 2016  11:58 AM

The internet of things: Changing the way we live

Harriet Green Profile: Harriet Green
home automation, Internet of Things, iot, smart home

We live in an age of immense technological and social change, much of which has been brought about by the emergence of such advanced technologies as cognitive computing, the internet of things and robotics. In fact, the current pace of change and the potential for further transformation is so great that many see this as the fourth industrial revolution.

During my keynote at the IFA consumer electronics show in Berlin this week, I talked about how these technologies, powered by cloud computing, are changing the way we live, work, produce and consume — disrupting many of our existing models for business and innovation. IFA is an important forum for technology companies and the world-leading manufacturers of appliances and devices to come together and show a vision for the future. And what a vision that was …

Billions of sensors are improving our relationships with the physical world. We are giving objects eyes and ears so they can sense and interact with us better. As a result, our relationships with buildings, cities, cars, devices and appliances are being transformed.

The challenge is that over the next few years, the internet of things will become the biggest source of data in the world. That’s where cognitive computing comes in. Machine learning and other techniques help us understand this data and turn it into insight that can help automate certain tasks, enable manufacturers to design better products and innovate new services, and help humans make better decisions.

I’m extremely excited about what we can achieve across so many areas of our lives, but I think it’s especially exciting to look at how IoT will impact our lives at home — the focus of my keynote at IFA, for which I was joined by some of the biggest names in consumer tech, who spoke about how cognitive intelligence is driving innovation in their companies.

A recent McKinsey study estimated that the value of people’s time spent doing domestic chores is around $11 trillion today and is expected to climb to over $23 trillion by 2025. That is staggering. IoT-enabled smart home “orchestrators” have the potential to streamline how we manage the home and all the tasks and chores within it. With natural language interfaces, access to all historical home usage data and machine learning features, these orchestrators can truly change how we manage our homes and related home activities.

For example, Whirlpool’s Norbert Schmidt joined me on stage at IFA to talk about how his company is using cognitive computing technologies to help deliver superior customer service by enabling its home appliances to connect with one another and with their users — opening up a new era of man-machine partnership in the home with better results for all. A Whirlpool washing machine, for instance, will communicate directly with a Whirlpool dryer, letting it know what kind of laundry load to expect and the optimum drying program to use — saving precious time and helping to reduce energy consumption in the home. Using sensors and cognitive intelligence, appliances will learn how people use them, giving design feedback to Whirlpool’s engineers and offering new levels of assistance to consumers for reordering detergents, filters and other supplies directly from online retailers.

We also heard from Panasonic’s David Tuerk about how the company is exploring machine learning and natural language processing capabilities to help transform the services it provides to consumers — giving them greater peace of mind knowing that their homes are comfortable, safe and secure. One area in focus is home safety and security, where Panasonic’s security cameras and sensors, which detect movement, glass breakage and the opening of doors and windows, will be coupled with cognitive computing capabilities. Thanks to video analytics, a home security system would know not to react if the neighbors’ children are just fetching their football, but will automatically alert the police or security services if a likely intruder tries to scale a fence to enter the property.

I also spoke about how we are working with “hearable” pioneer Bragi to take the IoT directly to the ear. Bragi has developed new-generation smart earphones that are among the world’s most powerful micro-wearable computers, with 27 unique sensors that can measure a user’s vital signs while augmenting their communications and interactivity. With Kickstarter funding, Bragi has already successfully launched The Dash onto the consumer market for sport and recreation. Now, with cognitive intelligence and natural language processing, Bragi’s hearable technologies are poised to transform the way we interact with our devices and communicate with each other. Bragi plans to apply them in the workplace as well: the vision is for users to wear the headset to receive instructions and interact with coworkers, and for management teams to keep track of the location, operating environment, wellbeing and safety of workers, especially in industrial locations.

More connected homes are safer homes — especially for the young and elderly. We are in the midst of one of the largest demographic and technological shifts in the history of humanity. The elderly are expected to become the largest single age segment in the world by 2050. Our ability to create better outcomes for them while letting them live independently at home is better for patients, caregivers, family, loved ones and health providers alike. At IFA we heard from Nokia’s Cedric Hutchings about how his company is exploring opportunities to integrate Watson IoT with Nokia’s wearables and smart devices for home care. Their goal is a system that helps detect and alert caregivers to potential problems such as deviation from daily routines, abnormal vital signs and sudden changes in the home environment. Voice-activated interfaces in the home will be able to take simple commands (such as “call an ambulance”) and offer reminders to take medicines or turn off appliances.

I hope those who watched my IFA keynote left with a powerful sense of excitement about how new technologies like the internet of things and cognitive computing are enabling such an incredible period of rapid technological change. I hope that I, along with my friends from Whirlpool, Panasonic, Nokia and Bragi, managed to convey the great opportunity we have before us to enhance our relationship with the physical world, making appliances, machines, devices, homes, cars and buildings better, safer, more intuitive and more interactive.

The most exciting aspect of the internet of things is its pervasiveness and accessibility. The cost and complexity of sensors and computer processing have fallen to such an extent that these technologies are really helping to democratize innovation for companies of all sizes and for people in all places.

We have an incredible opportunity before us to enhance our lives, innovate new products and services and transform societies across the world. We just need to make sure we seize the opportunity and put humans in the driving seat of the transformation.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

September 8, 2016  2:40 PM

The LoweBot brings retail IoT to life

Gary Orenstein Profile: Gary Orenstein
Consumer data, Customer data, Customer experience management, Data Analytics, Internet of Things, iot, retail, retailers, robot

Last week Lowe’s announced the upcoming availability of a new in-store robot it has been testing for the past two years. The LoweBot addresses two key areas: helping customers find things and keeping track of inventory. It will also provide us a rich view into the mix of retail and IoT. On the consumer side, a friendly automated helper is far more hospitable than the challenge of finding a staff member. And on the inventory tracking side, the LoweBot, if successful for Lowe’s, could provide a giant leap forward to a real-time supply chain.


Source: Lowe’s and Fellow Robots

The retail industry is both big and fast. Big to the tune of $3.2 trillion in 2015 according to the National Retail Federation, and fast in that inventory needs to move or face a descending discount ladder of lost profits. To drive efficiency, worldwide retail enterprise IT spending is likely to reach nearly $200 billion in 2016, according to Gartner.

Commercial robotics in retail

While the personal home robot has a long way to go before evolving past the Roomba, commercial robotics applications are an entirely different story. In a controlled and highly monitored environment, robots can go a long way.

Lowe’s partnered with Fellow Robots and spent two years running a pilot in one of the company’s Orchard Supply Hardware stores, a San Francisco Bay Area chain Lowe’s acquired in 2013. While many stores are pursuing mobile applications, the future of retail will be less about pulling out your phone and more about experiencing a choice of ambient and face-forward technology.

For example, a greeter robot at Lowe’s should be able to recognize me, use natural language processing to direct me to my desired item and tell me in advance whether it is on the shelf in stock. More sophisticated applications could suggest a gift or an item on sale that fits with prior purchases.

The IoT angle

The retail robot is just the visible part of a broader technology advance. With connected robots able to capture real-time inventory throughout the store and serve as a coordinated front line to field customer inquiries, a company like Lowe’s can dramatically revamp its supply chain.

Supply chain transformation
Gartner has recommended enterprises undertake supply chain initiatives in two 2016 reports. In May it published Retailers Must Redesign Their Supply Chain Networks to Support Multichannel Growth, and in March it published Retail Industry Outlook 2016: The Year for Accelerating Your Supply Chain Transformation. In that report, Gartner stated:

Retailers need to move urgently from talking about growing consumer expectations for shopping options and returns toward implementing the processes and technologies required to meet consumer demands.


Retailers who fail to invest in foundational operations, such as inventory accuracy and visibility, risk not achieving growth and scale objectives.

Real-time analytics
With real-time data streaming in from a network of in-store robots, Lowe’s is in a unique position to manage its business more efficiently. It can continue to hone supply chain operations, identify trends in consumer behavior, better understand the interaction that takes place within its stores and on most fronts provide a better shopping experience.

Understanding and adapting
When digging into more detail on the Fellow Robots product page, one finds features of the product including inventory scanning and auditing, along with integration with ERP software. While these are high-level points, they indicate a future of data capture and application development that will help retailers understand their customers and adapt to their needs.

LoweBot features

Retail is now a relationship

For years, the drumbeat in retail has been about multi-channel marketing … that the consumer is expecting a seamless and connected experience across online and offline avenues. The LoweBot has the potential to carry that to personalized precision. If I’m able or willing to identify myself, the LoweBot can have my entire Lowe’s history, and if I visit one store, or several in my home area, each LoweBot will have a perfect picture.

In this most recent launch, Lowe’s is keeping things simple by focusing on LoweBot benefits such as:

  • More time for employees to focus on delivering project expertise and personalized service
  • The ability to scan inventory and capture real-time data, helping detect patterns or gaps that will ultimately influence business decisions
  • Simpler and more seamless interactions with customers

However, I suspect this LoweBot experiment, if successful, will have a far greater impact on the future of IoT and retail.


September 7, 2016  12:53 PM

Why your IoT solution needs an application enablement platform (AEP)

Dima Tokar Profile: Dima Tokar
IIoT, Internet of Things, iot, iot security, middleware, platform

IoT solutions comprise many technology components, including hardware devices, connectivity, middleware and application layers. At its core, an IoT solution like fleet management, remote asset tracking, smart electric meters or smart buildings facilitates the delivery of IoT data to applications. While there are many ways to link IoT devices to applications, one of the best is an IoT application enablement platform (AEP).

People often ask why an IoT AEP is useful in creating an end-to-end IoT solution. In this article, I’ll take a closer look at the three reasons why you need an IoT AEP and why it is an important building block of any market-leading IoT solution.

Reason #1: You want your IoT solution to be reliable at scale. Today’s market requirements demand that a platform can both scale up to support millions of devices with different usage and technology characteristics and scale down to support limited-deployment, single-application pilot projects. A quality IoT AEP is architected to let organizations deploy devices cost-effectively and quickly with a pay-as-you-go business model. At the same time, as platforms allow varying scales of deployment, reliability must remain high. Redundant, fault-tolerant architecture is a must to eliminate outages and data loss in all but the most extraordinary circumstances. Building such IoT-ready infrastructure in-house is feasible, but it requires a significant engineering investment. An IoT AEP provides a reliable and cost-effective platform upon which an IoT solution can grow.
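To make the fault-tolerance point concrete, here is a minimal sketch of the kind of resilient ingestion a platform handles for you. The `publish` callable and its failure mode are hypothetical stand-ins for whatever transport a real AEP client uses; this is an illustration of the pattern, not any vendor’s API.

```python
import random
import time

def publish_with_retry(publish, payload, max_attempts=5, base_delay=0.1):
    """Send a telemetry payload, retrying with exponential backoff.

    `publish` is a hypothetical transport callable that raises
    ConnectionError on failure; a real platform client would wrap
    its own send primitive in logic like this.
    """
    for attempt in range(max_attempts):
        try:
            return publish(payload)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up: let the caller buffer or alert
            # Jittered backoff avoids thundering-herd reconnects
            # when millions of devices come back online at once.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

At fleet scale, the jitter matters as much as the retry itself: without it, every device that lost connectivity retries on the same schedule and the platform absorbs a synchronized spike.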

Reason #2: You want your IoT solution to have sophisticated user- and role-based security. All secure IoT implementations have a robust and flexible way to create and enforce a permissions schema, ensuring that users, things and systems each have access to just the necessary subsystems. This security concept is commonly known as the principle of least privilege, or minimally required access. To accomplish this, authentication and authorization controls should be sufficiently granular to allow complex permissioning schemes to be implemented and modified over time. Building an access control subsystem is rarely a core competency of enterprises deploying IoT solutions. But it is an integral component of a well-designed IoT AEP — another reason why organizations deploying IoT solutions are best served by using an IoT AEP.
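The least-privilege idea can be sketched as a tiny role-based permission check. The role and resource names below are purely illustrative, not drawn from any particular AEP; a production system would add users, groups and auditing on top of this core shape.

```python
# Map each role to the minimal set of (resource, action) pairs it needs.
# A device may only write telemetry; an auditor may only read it.
ROLE_PERMISSIONS = {
    "device": {("telemetry", "write")},
    "operator": {("telemetry", "read"), ("device-config", "write")},
    "auditor": {("telemetry", "read")},
}

def is_allowed(role, resource, action):
    """Grant access only if the role explicitly holds the permission.

    Unknown roles get an empty permission set, so the default is deny.
    """
    return (resource, action) in ROLE_PERMISSIONS.get(role, set())
```

The key property is default-deny: anything not explicitly granted is refused, which is what lets a permissioning scheme be tightened or extended over time without surprises.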

Reason #3: You want your IoT solution to be future-proof. Your first IoT solution will not be your last. Be assured that you will want to make modifications to existing solutions over time and create completely new IoT solutions. Savvy enterprises will incorporate an industry-agnostic, horizontal middleware core, like an AEP, to reduce the complexity of future changes to their IoT solutions. IoT middleware — and its clean and well-documented southbound (device-side) and northbound (apps and backends) APIs — creates a clear and logical separation between the different layers of the IoT technology stack. This enables businesses to adapt to changing business needs and reduces the engineering costs of future IoT deployments and incremental solution changes.
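The southbound/northbound separation described above can be illustrated with a toy middleware core. The `ingest` and `query` names are hypothetical, chosen only to show the layering; the point is that each side can change independently of the other.

```python
class MiddlewareCore:
    """Toy AEP core separating device-facing and app-facing surfaces.

    `ingest` plays the southbound (device-side) API role and
    `query` the northbound (application-side) one; neither side
    needs to know how the other is implemented.
    """

    def __init__(self):
        self._store = {}  # latest reading per device id

    # --- southbound: devices push readings, know nothing about apps ---
    def ingest(self, device_id, reading):
        self._store[device_id] = reading

    # --- northbound: apps query data, know nothing about device transports ---
    def query(self, device_id):
        return self._store.get(device_id)
```

Because applications only ever touch the northbound surface, a device fleet can be swapped to a new protocol, or a new application added, without changes rippling across the stack, which is exactly the future-proofing argument above.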

AEPs solve industry-agnostic IoT requirements. The best IoT solutions will need to be scalable, secure and future-proof. Many of the problems solved by AEPs require significant investment and specialized know-how. These key traits of award-winning IoT solutions are also rarely a core competency of most public or private sector enterprises. Using an AEP enables such organizations to focus on differentiating their IoT solution.


September 6, 2016  3:15 PM

What type of IoT data are we going to keep?

Mike Resseler Profile: Mike Resseler
Data Management, Data storage, health data storage, Internet of Things, iot

Let’s start by answering this question with a typical IT answer: It depends.

There are many different devices that collect different types of data. In my first post I talked about electricity data; in my second post I discussed smart fridges and health trackers. But many more examples already exist and are becoming increasingly mainstream, and in the future new types of devices will gather IoT data as well.

In an ideal world, every little piece of IoT data that is collected would be kept for many years. Availability solutions would be a crucial part of this strategy, and whenever needed, you would have all the data from a very long time frame at hand.

Unfortunately, we don’t live in an ideal world, and we don’t want to keep all the data for a very simple reason: cost. Holding all that data would cost a fortune in disk storage. Even if you used tape storage, it would still become rather expensive after a while.

Let’s illustrate this with a simple example and look at the data that is gathered. Let’s take a health tracker that gathers the following data:

  • Heartbeat
  • Number of steps
  • Run exercises + GPS map of the run
  • Sleep pattern

I know that most health trackers gather much more information, but let’s keep it simple for now. Obviously you want the end user to have an overview of all the statistics and data during the first week, perhaps averaged per hour, plus full details of each exercise. For the next three weeks, your online service will probably show averages per day, and after that per week and eventually per month.
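The tiered averaging just described can be sketched with a simple rollup function. This is a minimal illustration of the retention idea, not a real time-series pipeline; the bucket sizes are whatever granularity your service chooses.

```python
from statistics import mean

def roll_up(samples, bucket_size):
    """Average fixed-size buckets of readings, discarding the raw points.

    `samples` is a list of numeric readings (e.g., per-minute heart rate);
    bucket_size=60 turns minute-level data into hourly averages, and the
    result can itself be rolled up again (hourly -> daily -> weekly).
    """
    return [
        mean(samples[i:i + bucket_size])
        for i in range(0, len(samples), bucket_size)
    ]
```

Each rollup stage shrinks storage by the bucket factor, which is exactly where the cost savings come from, and exactly why the raw detail is gone afterwards.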

You might think your job is done now that you have decided what data you are going to keep and what data you will average, but you have to think further than that. You have achieved cost savings by averaging the data and throwing away the bulk of the raw data. However, that might not be the best choice…

Imagine that you could hold the most relevant IoT data and gather it for many years. Imagine that scientists could use that data for medical research. Instead of conducting a long, multi-year study, they would suddenly have raw data at hand at the beginning of the project. Instead of a study with a couple of thousand people (which is costly), they would have data from hundreds of thousands of people, maybe even more.

Imagine that your customer base is very loyal and regularly buys your newest health tracker. Imagine that this new tracker has functionality based on certain data already collected by the previous one, which can now be put to use for that new functionality…

And there are many more examples of future things you could do with that data. Devices that automatically regulate climate or electricity in your house can deliver data that can be used to tailor a service to each customer’s needs.

Fridge devices can enable a service that not only automatically creates a shopping list, but, based on your eating and drinking patterns, delivers health information about what you are consuming or even suggests alternatives. That data, again, could also be used for medical research and of course for marketing. Many options become available (but do remember that there are privacy rules governing data collection and the selling of that data…)

To conclude, what IoT data should you keep when collecting information from devices? It depends, but know that you need to think this through with the future in mind. More services might need that additional data, or the data could feed other services that make a difference down the road. And while you are at it, make sure you develop a strategic availability service for that data at the back end of your service, as we discussed in post one.


September 1, 2016  3:08 PM

Maturing business models and the ROI of IoT

Ryan Lester Profile: Ryan Lester
Data Analytics, Data Management, Internet of Things, iot, ROI

In new markets, the return on investment of projects can be minor, and in some cases negative in the short term. Companies invest to learn, to test market fit and to build a foundation for future success. The internet of things is now moving beyond these early days, and there are multiple ways to drive positive ROI from IoT connected products. Both direct revenue and indirect value from IoT data can quickly impact a company’s top and bottom line.

Increase ROI through more product sales

An increase in direct revenue from an IoT-connected product can come in a variety of forms. The most common is through new features that are only available in connected devices. These new features create differentiation and value in the product, driving customers to buy the connected product instead of an unconnected one. A simple example is a connected thermostat, which allows users to remotely control the device through an app, utilize multiple sensors to provide more uniform temperature across a house or office, and integrate the thermostat with other systems such as lighting or window shades. All of these connected features create smarter products, which drive customer adoption and increased sales.

Increase ROI through recurring revenue streams

Companies can also increase revenue through new sales and service models. A simple example of creating a recurring sales model through a connected product is the Amazon Dash Button. You place this device at a location where you have a consumable, like laundry detergent or coffee pods, and when you need a refill you hit the button to get your next replenishment. Where Amazon falls short is that the customer must remember to hit the button and have enough buttons for all their consumables. For product manufacturers this auto-replenishment model is an exciting new opportunity to create a recurring revenue stream with an end customer. I have worked with printer, pet care, life science, agriculture and energy companies to create automated replenishment models that have dramatically increased the ROI of a connected product project. This extends into service models as well, where a product service contract can be signed with a recurring revenue stream that allows for remote device management and support.

Increase ROI through indirect revenue streams

Beyond direct revenue from the sale of a product, or of parts and services supporting that product, there are a variety of other ways companies can make money from connected products. One of the biggest areas being unlocked is data markets, which allow revenue to be generated from the data or features a connected product enables. A great example is the work Con Edison, a utility company, is doing with its smart air conditioner program. By connecting consumers’ home AC units, Con Edison can promote better end-user behavior to manage peak electricity demand. The data flowing from a connected product opens new opportunities to monetize that data. Another example is new service models delivered using connected devices, as in the healthcare industry. By enabling connected products and devices, companies can create new ways to deliver services to their customers. It may not be the connected product itself that creates a new revenue stream; instead, the collection of devices together provides a new service revenue stream.

Increase ROI through cost savings

The final area where companies can increase the ROI of IoT is cost savings, including risk avoidance, improved product quality and more effective marketing. By having a product connected, a company gets a real-time view into the health and usage of that product, allowing it to better manage the risks of product failure, downtime and service costs. An early example is in the insurance industry, with companies like Progressive and its Snapshot product. Consumers can lower their insurance costs by sending driving data back to Progressive, which allows the insurance company to better manage risk. Similar IoT data will also lead to higher-quality products, as companies can better understand why products out in the market are failing. Connected products are disrupting a multibillion-dollar product, user and market-fit testing industry. Companies will have an unprecedented view of how customers use their products, which features they adopt first and whether they stop using the product. This will improve the ROI of product development processes, marketing programs and the other costs of designing and launching products.

As companies approach IoT and connected products, there are a variety of ways to create a strong ROI. There are ways to generate new direct revenue streams from connected products, indirect revenue from connected product data, and reduce the costs of developing and launching a product. With the IoT industry maturing rapidly, companies need to have both top- and bottom-line business goals in place for connecting products. There are numerous ways to make an IoT project profitable, so the question for companies today is how to prioritize their investment to get products to market quickly and to start gaining valuable new insights from that data.


September 1, 2016  10:29 AM

Four building blocks for better IoT security

Aapo Markkanen Profile: Aapo Markkanen
Blockchain, fog computing, Internet of Things, iot, iot security, Threat intelligence, Virtualization

Having spent a good part of the short and elusive British summer studying the inner workings of IoT security, I am now most delighted to be in a position to present the findings of my research. The full report on the topic is accessible to Machina Research clients, but as always we have made some of the observations available to a wider audience in the form of the report abstract and the press release, and in the following I will share even more of them with the TechTarget community.

For starters, I would dare to argue that “IoT security” as a term is almost an oxymoron. The security needs of a light bulb and, say, an industrial control system are, not too surprisingly, worlds apart, so it is somewhat unhelpful to put them under one huge umbrella. That, though, prompts the question of whether “IoT,” as such, is also an oxymoron, which for an IoT analyst is a rather uncomfortable thing to ask, so it may be better if we don’t follow that train of thought any further. Either way, given this backdrop, it should go without saying that security needs in the internet of things are fundamentally application-specific and contextual.

As a consequence, any enterprise that is working on an IoT deployment should carefully assess the risks involved with conceivable security incidents, and then determine how much money it is willing to spend on minimizing and mitigating them. The results may vary by geography: for instance, the new GDPR framework in the European Union will inevitably alter the economics of IoT projects in Europe. The “right” level of security is seldom “as much as possible,” and quite often that right level simply can’t be achieved without prohibitively compromising the application’s user experience or its underlying business case. When the risks and the costs don’t align easily, the only choice is sometimes to scrap the project.

With this long-winded caveat against over-generalization in mind, let’s delve into a few technological enablers that could be realistically seen as the main building blocks for a more secure internet of things:

  1. Threat intelligence and analytics: Increasingly sophisticated threat intelligence, driven by big data, is widely seen in IT-centric cybersecurity as the key to protecting enterprises against the dreaded zero-days, and this area holds a lot of promise in IoT as well. It is particularly interesting when it comes to industrial IoT. On one hand, IIoT applications tend to involve such a large number of possible combinations of hardware and software elements, sourced from different suppliers, that understanding all of the possible security implications can be extremely difficult. On the other hand, once a machine-learning scheme has established a baseline for what counts as the application’s normal behavior, its anomaly detection should be more reliable than is the case with applications whose behavior keeps changing more dynamically.
  2. Virtualization and hypervisors: Having already transformed app development and deployment in traditional IT, virtualization will also enter the IoT scene in earnest over the next couple of years. Hypervisor-based (i.e., full) virtualization techniques represent a game-changer, especially from the security standpoint, by allowing enterprises to isolate an application’s mission-critical and non-critical features from each other at a highly granular level. This, for instance, has the advantage of streamlining burdensome equipment re-certifications, which are typically required if the code affecting a feature that the OEM has defined as critical to the device’s security and/or safety is changed in any way. The approach does not yet make IIoT exactly DevOps-savvy, but over time it can certainly help smooth out some of the friction related to IT/OT convergence. Besides hypervisors, the Trusted Execution Environment, or TEE, is another isolation technique that will warrant extra attention, following its breakthrough in smartphone security.
  3. Fog computing and intelligent gateways: While most of the industry interest in fog computing has to do with its potential for edge analytics, the technology concept also has a distinct security aspect. Particularly with IoT applications that comprise a high number of dispersed and resource-constrained endpoints, a sufficiently capable master node, such as an “intelligent” gateway, should be seen as an anchor for the less secure end devices that reside below it in the network topology. Distributing security capabilities to the network’s edge through IoT gateways enhances security in various ways, including identity and access management, device authentication, cryptography, threat intelligence and incident response.
  4. Blockchain and distributed ledger: While its terminologically more famous incarnation, the blockchain, is without a doubt overhyped in the IoT context, the distributed ledger is something to keep an eye on. In settings where the number of both end nodes and their transactions can be appropriately restricted, it could become the new norm for ensuring the integrity of data over the application lifecycle of complex IoT systems. The ledger model could, for example, prove valuable in securing deployments such as vehicles, assembly lines and weapons systems, by providing different software-defined system components with trust that none of the other components have been compromised. As such, it could be instrumental for the evolution of what we at Machina Research refer to as Subnets of Things.
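The integrity guarantee described in point 4 can be illustrated, in a deliberately simplified and non-distributed form, with a hash chain in which each record commits to its predecessor. This is a sketch of the underlying idea only; a real distributed ledger adds consensus among nodes, which is omitted here.

```python
import hashlib
import json

def append_record(chain, payload):
    """Append a payload whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        body = json.dumps({"payload": rec["payload"], "prev": prev_hash},
                          sort_keys=True)
        if rec["prev"] != prev_hash or \
           rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True
```

Because each hash covers the previous one, altering any component record, say a firmware version logged for one subsystem of a vehicle, invalidates every later entry, which is what lets the other software-defined components trust that nothing upstream has been quietly modified.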

In all four areas, the focus is mostly on securing enterprise IoT, yet elements of them can be expected to also trickle down into consumer IoT, where the security outlook, in general, is currently much less clear. On the enterprise side, the pros and cons of security tend to be fairly tangible, but it may eventually take a regulatory nudge or two — similar to GDPR — to improve the situation in consumer-facing devices and services as well.


August 31, 2016  10:55 AM

Dig deeper than devices: IoT solving challenges in the enterprise

Chris Witeck Profile: Chris Witeck
Business strategy, Internet of Things, iot, IoT devices, platforms

Every time you turn around there is some sort of bullish forecast on the number of connected devices enabled by the internet of things. One example is this forecast predicting over 24 billion IoT devices by 2020 (and 34 billion total connected devices). Yet with all of this hype, some will tell you that we are still not articulating the real value of IoT and its impact on the business ecosystem. Tim O’Reilly said in an interview last year that Silicon Valley is “massively” underestimating the impact of IoT: “But I think they are missing the point. They are creating some gadgets, but they aren’t thinking about systems.”

After looking at the enterprise IoT space for the last couple of years, I also believe that the impact of IoT on the enterprise will be massive, disruptive and, in many ways, underestimated. Why underestimated? Much of the hype around IoT today is based on consumer context: I have a smart thing that I monitor and control with a smart device that uploads smart data to the cloud so I can have smart analytics about my smart thing. This conversation can apply to all kinds of consumer scenarios, and often drives people to think of IoT in the context of smart things/smart devices.

IoT challenges in the enterprise

The reality is, in an enterprise environment, IoT will be much more complex than the individual smart device/smart thing relationship. It gets back to Tim O’Reilly’s point about systems. When we started talking about IoT initiatives nearly two years ago, we found most people were not excited to talk about IoT as it related to a new sensor, device or gadget. Instead, they were intrigued when you talked about the potential for IoT to help solve complex business problems. As organizations become increasingly digital in all aspects of their business, new challenges arise in managing the connections, devices and applications that make up their digital business. This is especially relevant as enterprise applications evolve to be a collection of services and interactions spread across the cloud, on-premises systems and devices/sensors/things.

The value in this enterprise context is the potential for IoT to manage the interactions within this complex fabric of new sensors, gadgets, mobile devices, apps and old legacy applications and physical infrastructure. But, how exactly does IoT help manage this complexity? This is where you move the IoT conversation beyond things — where a “thing” is defined as some sort of sensor or device — and more toward how you can solve complexity by integrating things together and automating processes within the digital business. Essentially, IoT can help the enterprise deliver integrated experiences across applications, devices and cloud services in an increasingly digital world. This is where the evolving IoT platform comes in to play.

The evolving IoT platform

That in itself sounds challenging, but haven’t we been talking about integration challenges in the enterprise now for decades? According to IDC, “The IoT platform, at its most basic level, is the middleware that connects endpoints to applications, enterprise backend systems and analytics tools.” The IoT platform has the potential to be the orchestrator for the digital business; connecting the new and the old, managing workflows and leveraging these workflows to better understand the best way to operate the business. The challenge with IoT platforms is immaturity. There are all kinds of choices out there, with new platforms announced by the day, and probably no one platform that meets all of the needs for the enterprise. And, many of the IoT platforms have the look and feel of a developer tool set, requiring the enterprise to invest in building their own IoT solutions.

Much of this is expected, as it is still early in the evolution of the IoT platform. But this can also slow adoption until IoT systems mature. Last year, Gartner stated that 42% of organizations are using or planning to use IoT solutions within the next 12 months. This also means that nearly 60% of organizations are not planning to use or deploy IoT solutions in the next 12 months, and it would not be surprising if many in that 42% are in the early pilot or testing stage. In conversations with customers, it is clear that in some markets there are mature plans for enterprise IoT deployments (like manufacturing and smart cities), but we are just at the beginning of enterprise IoT adoption for many other industries, like healthcare and education.

While IoT in the smart thing/smart device context is seeing successful traction, the current challenges with IoT platforms may make it difficult to determine when is the best time to launch an enterprise IoT project. To determine whether IoT is a fit, take a close look at your business needs and business processes, and ask the question, “Are there current business challenges and inefficiencies that could be addressed through better integration and orchestration?” As organizations continue down the path of digital transformation, as they move to connect apps, devices, services, data and people spread across the globe, IoT will increasingly be viewed as an opportunity to orchestrate away complexity for the enterprise.

