IoT Agenda


October 6, 2017  2:10 PM

How IoT unlocks customer value: Five core truths of customer experience

Brian Hannon
consumer experience, Consumer IoT, Enterprise IoT, Internet of Things, iot, Personalization, User experience

Traditional product manufacturing went like this: We built it, we tested it, you bought it. Job done. Product managers never heard from customers — and frankly, they didn’t want to. If the R&D team wanted to tap into any user data, some heavy lifting was required to really put an organization in the customer’s shoes. Despite very clever research methodologies, data required a great deal of interpretation to show insights.

But now everything is shifting dramatically. With the amount of data doubling every two years, customer data is instant and substantial. Customer touchpoints are exploding, and product manufacturers need to grasp the potential of customer experience (CX) sooner rather than later if they don’t want to miss out on revenue opportunities. For many brands, the huge potential lies beyond the applications themselves, in how they use data to craft customer experiences.

So, from a CX perspective, how is IoT going to impact the value in the customer relationship? What are the CX mechanics that will impact this relationship and increase advocacy among users?

No CRM or brand advertising will come close to IoT’s capacity to engage customers emotionally and deepen customer value. For many brands, it will change the principles of customer relationship marketing in much the same way that digital changed the advertising industry. To further hone this point, I’ve gathered five core customer truths to explore the impact of data in engaging customers through IoT:

1. Prediction

You might be wondering how cloud-enabled sensors and actuators can change customer experiences. Put simply, a new customer journey is born: a model in which products predict and manage the factors that affect performance and delivery. For customers, IoT removes the burdens of ownership and those points of irritation which can, collectively, send a customer relationship over a cliff edge.

One of the great burdens of electric car ownership is range anxiety. Or in other words, “Am I going to get there?” Of all Tesla’s great features, its capacity to use predictive analytics to optimize journeys and keep customers on the road is critical. Even before the driverless age takes off, data is looking ahead to ensure we’re getting better experiences behind the wheel.

2. Insight

The new insights loop enables us to see products used and consumed through objective and vivid data. Not only is this a goldmine for R&D, it shifts the customer-product experience, enabling brands to share beautiful visualizations of data, ideas for optimizing usage and evolving technology, and comparative consumption patterns across user groups. And for customers, it enables them to explore their own consumption patterns.

In doing so, data becomes the product. The data generated becomes part of the product experience itself and transforms our expectations of functionality. Take Nest, for example. For decades, the home thermostat was a clunky one-way device to control your home climate. Now it’s a living, breathing, connected dashboard enabling customers to optimize energy usage based on real-time behavior. This not only results in money savings for the user, but also engages customers with an effortless brand experience. Knowledge becomes power for both the brand and customer.

3. Personalization

A lack of personalization is arguably one of the biggest pain points when you call customer service. After sitting through a lengthy automated call screening process, you have to run through all your PINs and passwords again when you finally reach a customer support specialist. It’s not just the time investment that bothers us, it’s the fact that so much of it is avoidable.

A smart IoT model changes this. Not only can data instantly identify connecting customers, but customer support can access product inventory, observe usage patterns and assure customers they’re in the hands of a CX support agent who genuinely knows who they are. Customer support, therefore, becomes truly personalized. An IoT + CX ecosystem delivers an almost concierge level of support.

4. Community

Sharing data across user groups and communities creates powerful tribes. Speak to any wearable tech advocate, be they a fun runner or cycling road warrior, and you’ll quickly discover the fiercely competitive world of segments and leaderboards. Big data brands like Nokia Health realize the potential of community through competition, not just connection. This living data era is pairing those trusted running shoes with the cloud and generating astounding content to share with those you love (and love to compete with).

5. Humanization

IoT humanizes products like never before. A great deal of brand strategy is invested in humanizing products so we can unlock emotional connections. Any half-decent brand idea will talk in terms of human personality, to the point that customers could describe the brand as a real person in the room.

Brands can bring genuine personality to product experiences. You can finely tune the personality you project. You can intellectualize through data and build emotion through semantics, content and tone of voice. For a humanized product economy, it is simply revolutionary.

The expectation deficit

Ric Merrifield, an IoT CX consultant, cites the point at which acceptance and expectation converge. For the IoT industry to realize its potential, customers need to accept access to their data. This opens an “expectation deficit.” Customers understand how valuable that data is to the product owner, but where’s the payback for the customer in this relationship?

The IoT sectors and applications that are seeing early adoption are those that balance the legacy functionality (a watch to tell the time) with the data functionality (a watch that tracks your activity). Customers share in the value of this data, and the expectation deficit is balanced.

The greater challenge is posed to traditional industries migrating to the IoT era. The internet of things is really an “internet of customer experiences.” Data will grow organically and rapidly, so brands will need to harvest it for powerful relationships. It is time for clever product engineers to grow into brilliant CX engineers too.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

October 6, 2017  12:07 PM

Succeed in enterprise IoT: Narrow your focus and focus your vision

Chris Witeck
Business strategy, Digital transformation, Enterprise IoT, Internet of Things, iot, IoT platform

According to a recent Cisco survey, 60% of IoT projects stall at the proof-of-concept stage. And of the 40% of IoT projects that made it past the proof-of-concept stage, only 26% were considered a success. That adds up to a pretty dismal success rate. Is this indicative of IoT being overhyped and oversold? Or, to borrow from Gartner, is this just IoT slipping into the “trough of disillusionment” before finding that “slope of enlightenment?”

Having talked to many organizations evaluating IoT projects, I believe many of them set the bar too high with their initial projects, then find it difficult to tie together too many disparate systems and apps and to keep track of everything. This is reinforced by the same Cisco survey, which identified two main failure points for IoT projects: integration complexity and lack of internal expertise. Based on my research and time spent helping customers deploy IoT projects, I’ve seen two crucial elements that set projects up for long-term success. Enterprises stepping into IoT initiatives need to “narrow your focus” and “focus your vision”: start with more narrowly defined IoT projects and drill in on how IoT can be an enabler for solving specific business problems.

Step one for IoT success: Narrow your focus

Already there is evidence of a shift to a more focused and specialized approach to building IoT technologies, something that may help to limit the overall complexity and integration challenges that come from using generic tools to solve a complex problem. One recent example was GE acknowledging that building a horizontal IoT platform stack is perhaps too large a challenge, and that to succeed as an IoT service provider you need to narrow your focus — in the case of GE, that means selecting specific industries where it can focus its IoT technology efforts. While the big horizontal cloud providers, such as Microsoft and Amazon, may dominate at the compute and IoT platform layer, it is down at the orchestration layer where this specialization will take root. Essentially, this means focusing the movement of information between users, IoT devices, enterprise applications and cloud services on a specific and narrowly defined business problem, while using tools and techniques optimized for solving that type of problem.

I witnessed this focus firsthand in conversations with healthcare organizations. After the initial, broader and more technology-focused experimentation, these organizations started to invest in a variety of very specific, very focused IoT workflows. Just some of the examples I encountered included automating the movement of patient data from a smart device to a medical record within a patient space, accelerating the movement of patient data to clinicians based on clinician location, or orchestrating the tracking and locating of medical equipment in use across the organization. These were unique solutions focused on solving specific business problems as part of a broader digital transformation initiative. And rather than attempt to build these IoT workflows in house, these organizations were looking toward service providers and system integrators with expertise in that area, or to healthcare-focused startups, to assist with these efforts. This is not unique to healthcare; similar examples can be found across any industry. It was reinforced in a recent survey from Vanson Bourne in which 74% of enterprises said they planned to work with external partners in building their IoT systems.

Step two for IoT success: Focus your vision

This narrowed focus speaks to the evolving maturity of enterprise IoT and will help enterprises reduce complexity in their digital transformation initiatives. Yet it only solves one piece of the puzzle. As organizations invest in these IoT workflows tying together devices, things, users, cloud services and on-premises applications, they are effectively stretching their network boundaries to anywhere these connections occur. While Metcalfe’s law speaks to the inherent value of these connections, managing and securing them will require a level of visibility that most organizations are not accustomed to. It will require complete east-west-north-south visibility, plus visibility into the event streams generated by every interconnected device, thing and application tied to any IoT workflow. And this visibility needs to be in real time to allow for rapid responsiveness in optimizing, troubleshooting and securing IoT workflows. While this sounds like broadening your visibility by casting a wider net, in reality it means focusing your vision by putting your network data into the relevant business context. That necessitates taking a more proactive, analytical and business-focused approach to network visibility. The challenge is that this level of network agility, visualization and analytical capability has traditionally been unavailable to the enterprise. The good news is that there is an evolving category that speaks to this type of focus, something Gartner refers to as “NetOps 2.0.” It moves NetOps from a tool for optimizing network performance toward a methodology for helping business initiatives succeed.

Summary

The reality is that most organizations are investing in IoT, with efforts increasingly tied to solving problems related to their overall digital transformation strategy. While many initial IoT projects may have floundered at the proof-of-concept phase, organizations have learned from those initial challenges and are starting to narrow their focus, defining concrete business objectives they want to tackle and working with service providers who are also narrowing their focus. Yet enterprises must not forget to also focus their vision, ensuring that, as their IoT workflows touch devices, things, apps and users across the globe, they have the visibility required to ensure that these workflows, and the business objectives they are tied to, are optimized for success.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 5, 2017  4:08 PM

How IIoT platforms with AR/VR help OEMs reduce operating costs

Rick Harlow
AR, augmented reality, IIoT, Industrial IoT, Internet of Things, iot, IoT platform, OEM, Virtual Reality

Kurt from our Houston office recently visited an upstream operation in Eagle Pass, Texas. At this operation, a variety of mission-critical equipment was operating and collecting crucial production data points. It took Kurt a good six hours to get to the facility. It was time-consuming, tedious and it cost money. We asked ourselves a simple question: “How can we reduce Kurt’s visits to Eagle Pass by combining the 3D immersive experience of a virtual reality (VR) tool with the deep advanced analytical capabilities of an IIoT platform?” That question led to the development of augmented reality (AR)/VR apps that gracefully complement an IIoT system.

Take, for example, a pump or a motor that commonly powers upstream operations. Our IIoT platform’s anomaly detection algorithms flag and mark cases of motor temperature overheating. These anomaly markers are laid out on a 3D model of the asset, and reliability engineers, one sitting in Houston and another in Oslo, can experience the unhealthy motor from the comfort of their headquarters. Sensor data from the motor is streamed from historian tags in real time to the IIoT platform. The IIoT platform is then integrated with the AR/VR app, which enables the engineers to perform multiple asset examination operations. They can get “exploded” and “zoomed-in” views of the asset and can rotate the asset across the 3D axes to pinpoint what is going wrong and where it’s going wrong.
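The post doesn't include any of the platform's code, but the core idea of flagging temperature anomalies on a live stream and turning them into markers that a 3D/AR view can display can be sketched in a few lines. Everything here (the asset names, the threshold, the smoothing factor and the marker format) is a hypothetical illustration, not the vendor's implementation:

```python
# Minimal, illustrative sketch of threshold-based anomaly flagging on a
# motor-temperature stream. Names and thresholds are hypothetical; a real
# IIoT platform would be far more sophisticated.
from collections import namedtuple

Reading = namedtuple("Reading", "timestamp asset_id sensor value")

TEMP_LIMIT_C = 95.0   # assumed overheating threshold
ALPHA = 0.1           # smoothing factor for the exponential moving average

def flag_overheating(readings):
    """Yield anomaly markers for readings whose smoothed value exceeds the limit."""
    ema = None
    for r in readings:
        ema = r.value if ema is None else ALPHA * r.value + (1 - ALPHA) * ema
        if ema > TEMP_LIMIT_C:
            # A marker like this could be attached to the asset's 3D model so
            # remote engineers can see where and when the motor ran hot.
            yield {"asset": r.asset_id, "sensor": r.sensor,
                   "time": r.timestamp, "smoothed_temp": round(ema, 1)}

# Example usage with synthetic data (a slow temperature ramp):
stream = [Reading(t, "pump-07", "motor_temp_c", 85 + t * 1.5) for t in range(20)]
for marker in flag_overheating(stream):
    print(marker)
```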

In addition to experiencing the asset, the reliability engineers at headquarters can use voice and hand-based gestures to understand the sequence of events leading up to a high-value failure mode.

These features are extremely useful for optimizing upstream operations, reducing trips and shaving off costs in a hyper-competitive marketplace. As Harvey Firestone said, “Capital isn’t so important in business. Experience isn’t so important. You can get both these things. What is important is ideas. If you have ideas, you have the main asset you need, and there isn’t any limit to what you can do with your business and your life.” These new ideas promise to change the way OEMs and operators run their upstream operations.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 5, 2017  1:49 PM

Demystifying IoT and fog computing: Part one

Sastry Malladi
Data Analytics, Edge computing, FOG, fog computing, Internet of Things, iot, IoT analytics, IoT data

Two of the most common buzzwords you hear these days are IoT and fog computing. I’d like to put some perspective on both, drawing on real-world experience from my current role as CTO of FogHorn. I intend to divide this up into a series of posts, as there are many topics to cover.

For the first post, I’d like to cover some basics and context setting.

The “T” in IoT refers to the actual devices — whether they are consumer-oriented devices, such as a wearable, or industrial devices, such as a wind turbine. When people talk about IoT, more often than not they are referring to consumer IoT devices. There are a lot of technologies and applications to manage and monitor those devices and systems, and with the ever-increasing compute power of mobile devices and broadband connectivity, there isn’t a whole lot of new groundbreaking technology needed to address the common problems there. But when it comes to industrial IoT, it is a different story. Traditionally, all the heavy and expensive equipment in the industrial sector — be it a jet engine, an oil-drilling machine, a manufacturing plant or a wind turbine — has been equipped with lots of sensors measuring various things: temperature, pressure, humidity, vibration and so forth. Some of the modern equipment also includes video and audio sensors. That data typically gets collected through SCADA systems and protocol servers (MQTT, OPC-UA or Modbus, for example) and eventually ends up in some storage system. The amount of data produced per day ranges from terabytes to petabytes depending on the type of machine. Much of that data could be noise and repetitive in nature. Until recently, that data has not been utilized or analyzed to glean any insights into what might be going wrong, if anything (therefore, no predictive analytics).
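To make the MQTT leg of that data path concrete, a gateway-side collector subscribing to sensor topics might look roughly like the following. The broker address and topic layout are placeholders, the payloads are assumed to be JSON, and paho-mqtt (shown here in its 1.x callback style) is just one common client library, not necessarily what any given SCADA stack uses:

```python
# Simplified MQTT collector: subscribes to sensor topics and hands each
# message to a local handler. Broker address and topics are placeholders.
import json
import paho.mqtt.client as mqtt   # paho-mqtt 1.x callback style

def on_connect(client, userdata, flags, rc):
    print("connected with result code", rc)
    client.subscribe("plant/+/sensors/#")   # e.g., plant/line1/sensors/temp

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)       # assumes JSON payloads
    print(msg.topic, reading)               # real code would buffer/forward this

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.local", 1883, keepalive=60)
client.loop_forever()
```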

The Industry 4.0 initiative and digital twin concepts hover around the idea of digitizing all these assets and transporting all the data to the cloud, where analytics and machine learning can be performed to derive intelligent insights into the operations of these machines. There are several problems with this approach: lack of connectivity from remote locations, huge bandwidth costs and, more importantly, lack of real-time insights when failures are occurring or about to occur. Edge computing or fog computing is exactly what is needed to solve this problem, bringing compute and data analysis to where the data is produced (somewhat akin to the Hadoop concept). In this article, I’m using edge and fog interchangeably; some don’t agree with that — some people like to call the fog layer a continuum between edge and cloud — but for the purposes of this article, that difference shouldn’t matter much.

I know some of you may be thinking, “So what’s the big deal? There are mature analytics and machine learning technologies available in the market today that are used in a data center/cloud environment.” Unfortunately, those existing technologies aren’t well suited to run in a constrained environment — low memory (< 256 MB RAM), limited compute (single- or dual-core low-speed processors) and limited storage. In many cases, the technology may have to run inside a programmable logic controller (PLC) or an existing embedded system. So the need is to be able to do streaming analytics (data from each sensor is a stream, fundamentally time-series data) and machine learning (when the failure conditions can’t be expressed easily or are not known) on the real-time data flowing through the system. A typical machine or piece of equipment can have anywhere from tens to hundreds of sensors producing data at a fast rate — a data packet every few milliseconds, or sometimes microseconds. Besides, data from different types of sensors (video, audio and discrete) may need to be combined (a process typically referred to as sensor fusion) to correlate and find the right events. You also have to take into account that the hardware chipset can be either x86 or ARM based, and typical devices (gateways, PLCs or embedded systems) will be the size of a Raspberry Pi or smaller. Finding a technology that provides edge analytics and machine learning that can run in these constrained environments is critical to enabling real-time intelligence at the source, which results in huge cost savings for the customer.
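As a rough illustration of what "streaming analytics in a constrained environment" can mean in practice, the sketch below keeps a fixed-size window per sensor and flags readings that deviate sharply from recent behavior, using only the standard library so the memory footprint stays bounded. The window size and threshold are illustrative assumptions, and a production edge stack would of course do far more (sensor fusion, expression engines, learned models):

```python
# Memory-bounded streaming check suitable in spirit for a small edge device:
# a fixed-size window per sensor, flagging values far outside recent behavior.
from collections import deque
from math import sqrt

class StreamMonitor:
    def __init__(self, window=128, z_threshold=4.0):
        self.window = deque(maxlen=window)   # bounded memory per sensor
        self.z_threshold = z_threshold

    def update(self, value):
        """Return True if the new value is anomalous relative to the window."""
        anomalous = False
        if len(self.window) >= 16:           # wait for a minimal history
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = sqrt(var) or 1e-9          # avoid division by zero
            anomalous = abs(value - mean) / std > self.z_threshold
        self.window.append(value)
        return anomalous

# Example: a vibration signal that suddenly spikes
monitor = StreamMonitor()
for i, v in enumerate([0.1] * 200 + [2.5]):
    if monitor.update(v):
        print("anomaly at sample", i, "value", v)
```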

In my next article, I’ll talk about some of the use cases that are taking advantage of this technology and explain how the technology is evolving and rapidly finding its way into many verticals.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 5, 2017  10:17 AM

IoT and secure data centers

Rado Danilak
ai, Artificial intelligence, chip, Consumer IoT, Data Center, Internet of Things, iot, IoT devices, IoT hardware, iot security, voice

IoT devices and use cases are exploding. Coupled with advances in artificial intelligence (AI), they are poised to transform our lives. Interfaces are moving from touchscreens to intelligent voice control. In order for IoT devices to become ubiquitous, they must be increasingly intelligent and, at the same time, low cost. That is the conundrum — product attributes that are in conflict with each other. The only solution is to amortize the cost of high-quality AI across many devices with centralized processing in hyperscale data centers. Designing every single light switch to have intelligence beyond Siri or Alexa would be prohibitively expensive. However, passing voice commands to data centers, where the cost of AI can be amortized across thousands of intelligent light switches, makes high-quality AI a thousand times cheaper. A light switch’s voice interface is used for less than one minute out of the 1,440 minutes in a day, so $1,000 worth of high-quality AI can be shared at a cost of only about $1 per light switch — a manageable price.
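The back-of-envelope arithmetic behind that claim, using the article's own figures and making the time-sharing assumption explicit, looks roughly like this:

```python
# Back-of-envelope amortization using the article's figures.
minutes_per_day = 1440
usage_minutes_per_switch = 1          # "less than one minute" of voice use per day
ai_cost = 1000                        # $1,000 worth of centralized AI capacity

# If each switch needs the voice AI for about one minute a day, one unit of AI
# capacity can in principle be time-shared across roughly 1,440 switches.
switches_served = minutes_per_day // usage_minutes_per_switch
cost_per_switch = ai_cost / switches_served
print(switches_served, "switches, about $%.2f per switch" % cost_per_switch)
# -> 1440 switches, about $0.69 per switch (on the order of $1, as the post says)
```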

For example, an intelligent light switch can save energy by reducing light output when there is sufficient ambient light or the user is working in a different area. With an echolocator, it can pinpoint a person’s location without the need for a camera, which many people do not like in private environments. If a person suddenly ends up on the floor, an intelligent switch can determine whether the person is doing yoga or is a grandmother who collapsed and needs 911 to be called. Or, if somebody enters the room at 3:00 a.m. not through the door, it can decide it is wise to turn the lights on and/or call 911.

Another aspect is sharing knowledge. If we have a toaster with a small camera sensor to make sure our bread is toasted but not burned, that toaster may encounter a new type of bread that behaves differently. Having our toasters networked means they can learn from each other’s experience. Again, hyperscale shared data centers provide this shared knowledge. Such a toaster could also recognize the voice that is talking and make toast exactly the way that person likes it.

However, there is an additional ingredient needed for success: security. It will not be acceptable to use smart devices if a hacker can burn down somebody’s house by attacking a stove or toaster. It is not acceptable that somebody can listen to your private conversations. Thus, security is a must. Security drives further demand for processing performance at the data center — not only for encryption, but for AI software to determine if something is appropriate or dangerous, if something should or should not be done, and even whether a malicious hacker requested the action in question.

From intelligent refrigerators telling us not to drink the out-of-date milk, to stoves making sure food will not be burned, to microwaves which will not overheat food, to smart garage openers, home security — everything in our lives will benefit from AI-powered IoT devices with collective shared knowledge and wisdom — if they also have security. It can’t be accomplished without amortizing the cost of intelligence by concentrating it in hyperscale data centers, where AI cost is spread across billions of “intelligent” IoT devices.

All of this drives the demand for more processing power in these data centers. Today’s data centers already consume 40% more energy than Great Britain and more than the airline industry, and with 15% annual growth we will have twice as many data centers, burning twice as much power, every five years! Humanity can’t afford >10% of the planet’s energy going into data centers in 10 years, and definitely not 40% of the energy in 20 years.

In the past, Moore’s law gave us power reductions in semiconductor products through lower voltages and aggressive process shrinks. But now, with processor performance and power at a plateau, we can’t rely on existing technology to offset growing power consumption demands as it did in the past.

However, companies today are working on new solutions to this problem, building chips that not only reduce data center hardware, but also reduce the power consumption of public and private cloud data centers. The processing power problem cannot be ignored if we want to make billions and billions of intelligent IoT devices safe, secure and cost-effective.

Those of you in the San Francisco area are encouraged to watch my invited speaker presentation at the Storage Visions Conference, where I will address the growing data center energy consumption challenges, on Oct. 16 at 2 p.m. at the Embassy Suites in Milpitas, Calif.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 4, 2017  3:35 PM

Virtualization and IoT made for one another, but performance monitoring still essential

Dirk Paessler
Internet of Things, iot, IoT applications, Monitoring, Network monitoring, Network virtualization, Virtualization

In many ways, the internet of things offers the perfect use case for virtualization — an elastic and highly adaptable environment and framework capable of adjusting to the dramatic lulls and spikes that characterize the data streams between machines. In much the same way, IoT, with what is shaping up to be an innumerable collection of connected devices, requires the dramatic scale that most traditional networks can’t provide in a cost-effective way.

Of course, IoT isn’t the only driver of virtualization. The ability to dynamically allocate CPU capacity, memory and disk space to applications as needed is a powerful benefit in and of itself. Not surprisingly, for those just beginning their journey with virtualization, it can feel like performance, capacity and scale are unlimited — and in many ways, that’s understandable.

Even so, as IoT continues to add additional devices and data to these same networks, an inescapable reality becomes clear: It’s just as important to monitor the health and performance of your virtualized network and components as it is to monitor other parts of your infrastructure. In fact, reliable network monitoring plays a crucial role in dynamically assigning the very capabilities and capacities virtualization makes possible.

It’s also no secret that any network failure in a virtualized environment can have a dramatic impact on the applications in question, and for that reason alone, network monitoring is required to ensure that system administrators, network engineers and IT teams know as soon as any issue arises. That, of course, isn’t all. Following are some of the many reasons it’s important to monitor your virtual assets, as well as specific things you’ll want to keep an eye on to ensure that your IoT initiatives function smoothly with the virtualized network you ultimately put in place.

  • Use monitoring to ensure that you plan for the virtual assets your IoT initiative will demand. Increasingly, smaller organizations are looking to virtualization and the many benefits it offers for the first time as their IoT initiatives grow in scale and scope. Network monitoring technologies and capabilities should be considered a necessary investment for organizations at this important stage for a simple reason: It’s critical to understand what different applications demand of your network resources. Virtualizing systems without knowing the CPU and memory load, disk usage and network usage is very risky. Surprises are never good in any network, and virtualization is not a one-size-fits-all proposition. It’s important to plan your efforts based on the facts, not intuition.
  • Use monitoring to optimally assign resources. With virtualization, you always want to find the right balance. You don’t want to assign too few virtual machines to a host — and in that way waste resources — but you also don’t want to overload any virtual servers — and in that way slow or disrupt the performance of all of the systems and applications running off of it. Network monitoring technology enables you to see in real time how these virtualized resources are being used during periods of peak usage and lulls — information you can use to assign just enough of the resources at your disposal, but not too much.
  • Use monitoring to maintain quality of service. Today’s network monitoring technology not only enables you to see what’s happening in your network right now, but also provides a historical record that can be used when troubleshooting any problems that arise (you don’t have to replicate or recreate scenarios to see what happened when a real historical record is available and easily accessed) and to compare how any changes to the network ultimately impacted quality of service. Virtualization is no exception. Monitoring before and after you begin and complete your virtualization deployment will enable you to demonstrate just how much the virtualization effort improved the service experienced by users.

Fortunately, network monitoring technology has advanced along with virtualization, and today there are many sensors and technologies available that provide a real-time view not only of what’s happening in these virtualized networks, but also, simultaneously, in the traditional infrastructure and data centers they augment. Some of the many performance metrics and capacities that can be monitored in virtual networks include CPU usage as a percentage per guest, CPU usage for each hypervisor, total CPU usage, read and write speeds, the number of packets sent and received by bytes or time period (such as seconds), network usage, disk usage, available disk capacity, active memory, consumed memory, the number of virtual machines running, load average and the health of any host hardware, including temperature, power, fan rotations per minute and battery voltage. And these are only a few.
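As a small illustration of how a few of those host-level metrics can be sampled programmatically, the cross-platform psutil library exposes several of them. This is only local collection on one host; a real monitoring deployment would also pull per-VM and hypervisor counters from the virtualization layer itself:

```python
# Sample a few host-level metrics with psutil; a full monitoring setup would
# also collect per-VM and hypervisor counters from the virtualization layer.
import psutil

metrics = {
    "cpu_percent": psutil.cpu_percent(interval=1),          # total CPU usage
    "memory_percent": psutil.virtual_memory().percent,      # consumed memory
    "disk_percent": psutil.disk_usage("/").percent,         # disk capacity used
    "bytes_sent": psutil.net_io_counters().bytes_sent,      # network usage (out)
    "bytes_recv": psutil.net_io_counters().bytes_recv,      # network usage (in)
}
print(metrics)
```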

A bright future, of course, lies ahead for IoT as we find new ways to make life better with connections that put more information than ever at our fingertips. But just as these connected devices will enable us to act with greater knowledge, the virtualized networks that make them possible will require far greater diligence on the part of networking professionals who must now, more than ever, take steps to ensure that they know exactly what’s happening across their networks at any given point in time.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 4, 2017  2:47 PM

Will consumers trust you with their connected life?

Eve Maler
consumer devices, consumer experience, Internet of Things, iot, IoT devices, IoT hardware, iot security, security in IOT, trust

As IoT technology becomes digitally woven throughout consumers’ homes, cars and wardrobes, the kind of relationships companies build with their users becomes ever more intimate and personal.

When the product you sell can hear what its owners are saying, capture data on their daily activities, help drive their cars and even watch over their sleeping children, factors like price, quality and durability fade beside the most important consideration of all: Can your company be trusted?

So when wireless home audio provider Sonos announced changes to its privacy policy and data collection practices recently, we got a strong reminder of how difficult it can be for companies in the emerging IoT space to balance security and user privacy. The changes seemed innocuous enough, pertaining primarily to the kind of data used to improve product performance and guide personalization. Indeed, Sonos went above and beyond standard privacy policy update practices, posting a detailed pre-announcement to the company blog, demonstrating a dedication to clarity and transparency in its new policy. The key sentence?

When making these changes, we took the time to work with experts in the privacy community to understand best practices and make sure the language we chose was clear, future-fit and avoided as much confusing legal jargon as possible.

Despite best intentions, however, many Sonos customers — and media observers — focused on the more austere aspects of the new policy. Notably, how it expanded Sonos’ data collection practices, but offered no opt-out. Customers who declined its terms would no longer be able to update their Sonos software, leaving their costly high-end system destined to lose functionality over time.

It’s a whole new IoT world

Should the Sonos privacy and PR teams have anticipated this backlash? Difficult to say. The race is on across all reaches of the IoT landscape — connected home, connected car, digital healthcare, smart city and so on — to launch new offerings with greater capabilities and convenience. There’s plenty of demand, and Sonos must innovate to meet consumer needs and stay competitive.

This dynamic is on display in the new Sonos privacy policy; it was announced along with the availability of a long-anticipated voice assistant enabling Sonos users to control music playback through spoken commands (as with Amazon Alexa). Sonos pointed out that collecting data around its new voice features was “needed to ensure proper functionality and to help improve these features.”

Let’s not lose sight of the fact that Sonos is in the business of providing a secure experience through wireless streaming media devices that live on your home network but must connect to third-party streaming services. Securing these devices and protecting the personal data of customers is the right thing to do, and in Sonos’ best interests. Yet pushing the technology envelope could possibly bring companies with IoT business models into conflict with the emerging privacy regulatory framework.

Of course, customers don’t necessarily care whether a company is in full legal compliance, but they care very much about whether it’s trustworthy. The 2017 MEF Global Trust Report surveyed consumers across 10 global markets and showed the importance of trust in today’s digital economy:

  • 40% of respondents named one or more trust issues as their biggest barrier to using more apps and services
  • 75% say they always or sometimes read a company’s privacy policy; 39% agree to such policies only reluctantly
  • 82% have taken action due to concerns over privacy and/or security, including deleting or discontinuing use of a service, warning friends or family, or switching to a competitive service

In this light, Sonos has taken a considerable risk in an area of real consumer sensitivity: how their data is used. In the MEF report, only 3% of respondents said that they were always willing to share data — half the previous year’s figure — while 39% said they never share it.

Keeping the (good) faith with consumers

Good faith is keenly important in an area like IoT, where the rules aren’t written yet. In spite of ample evidence of IoT security gaps that can endanger users of everything from cars to pacemakers, lawmakers have been slow to set standards and mandates. The Internet of Things Cybersecurity Improvement Act of 2017 under consideration by the Senate would cover only government contracts, not commercial markets. The EU’s General Data Protection Regulation, enforced as of May 2018, will likely bear heavily on IoT providers.

In the meantime, consumers shopping for IoT products don’t always keep security top of mind. What happens when a poorly secured IoT device compromises a customer’s entire home network, including financial and health records on Dad’s laptop? Or when a hacker gains control over a smart home vendor’s systems to wreak havoc across their entire customer base?

The slow, reactive nature of regulatory processes is no excuse for companies to stand idle about IoT security. Customers can quickly turn against a company merely for making a technical change to privacy policy. Imagine if they learn a product has put their financial well-being, home security or even their very lives at risk because of inadequate safeguards? Security must be woven seamlessly into the IoT product experience, performing flawlessly. Half of all respondents in the MEF study named a bad user experience as the top reason to lose trust in an app or service — more than negative reviews or reports from friends, family or the media. And don’t think consumers aren’t paying close attention: that same report found 80% of connected home adopters said they read privacy policies.

Your message to consumers should be clear:

Nothing is more important to us than protecting our customers. We’re taking every step to safeguard your data, fortify our products and prevent new vulnerabilities from entering your home. You have choice and control about the data you share through our products — not because of regulations, but because it’s the right thing for us to do.

Sonos isn’t the first company to struggle to find the right balance between privacy, security and data collection. It won’t be the last, as IoT becomes more pervasive. One lesson other players in the space should learn from this controversy? An open discussion of data practices and a dialog with consumers are crucial to reaching that balance.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 4, 2017  9:58 AM

Key considerations when transitioning from Pi to production

Justin Rigling
board, GATEWAY, Internet of Things, iot, IoT hardware, iot security, prototype, Raspberry Pi

In my last article, I shared the pros and cons of Raspberry Pi and commercial-grade gateways for IoT prototyping. Many companies that prototype with Raspberry Pi ultimately choose a commercial gateway for their production run. They realize during proof-of-concept prototyping that the Raspberry Pi simply doesn’t meet specific needs or perform as expected. If you are considering a transition to a commercial-grade gateway, read on as I discuss the best approach to making this transition.

The best time to switch from your Pi prototype is before you move into production. To determine which IoT gateway best supports your technical and budget requirements, start with these five key considerations:

1. Radio

Did you add a daughter card to your Raspberry Pi? If so, when switching from Pi to a different production gateway, consider why you added it and how you can select the best-fit chip in your production gateway. The right chip embedded in your production gateway:

  • Eliminates multiple wires
  • Eliminates the need for an expansion board, since serial port connections are already made

Weigh these considerations carefully, especially for small devices that might not have a lot of physical real estate, and for multiple devices housed closely together that could potentially interfere with each other over the airwaves.

2. Certification

It’s important to note that the FCC classifies you as an “intentional radiator” when you take your Raspberry Pi prototype to production. As such, you must comply with CFR 15.249, which involves successfully passing hundreds of FCC tests. (You can read more here about being an intentional radiator.) Conversely, most vendors test and pre-certify their commercial IoT gateways to ensure regulatory compliance. When researching a commercial gateway, make sure to ask about the gateway’s certifications as this will free you from the significant time and expense associated with compliance.

3. Scalability

While commercial-grade IoT gateway designs accommodate growth, designers prototype on a small scale. In the process, don’t forget the size of your full-scale production, and plan for the potential for even more growth. As you research commercial gateways, ask whether you can load your own Linux image and edge applications at your own warehouse; or, for a minimum order quantity, some vendors may offer to ship preloaded gateways directly to you.

Another vital question concerns the gateway’s exterior. Commercial gateways should offer options that allow you to ship them to locations where local technicians can easily install them, given location-based requirements such as PoE and mounting options.

4. Software

Your programming language can impact how easy it is to move your prototype from Pi to a production gateway. If possible, choose a language that most hardware supports and that migrates easily, such as JavaScript (including Node.js), Java or Python. If you have already programmed in another language, you’ll need to research how easy or difficult it will be to migrate your code from your prototype to any commercial gateway under consideration. One practical pattern that eases the move, whatever the language, is isolating board-specific I/O behind a thin interface.
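Here is a minimal sketch of that pattern in Python. The RPi.GPIO import is Pi-specific, and the gateway-side class is a stand-in for whatever SDK your chosen vendor actually provides:

```python
# Keep board-specific I/O behind one small interface so prototype code ports
# from a Raspberry Pi to a commercial gateway with minimal changes.
class SensorPort:
    def read(self) -> float:
        raise NotImplementedError

class PiSensorPort(SensorPort):
    """Raspberry Pi implementation (RPi.GPIO is Pi-specific)."""
    def __init__(self, pin: int):
        import RPi.GPIO as GPIO          # only available on the Pi
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(pin, GPIO.IN)
        self._gpio, self._pin = GPIO, pin

    def read(self) -> float:
        return float(self._gpio.input(self._pin))

class GatewaySensorPort(SensorPort):
    """Placeholder for a commercial gateway's vendor SDK."""
    def __init__(self, channel: str):
        self._channel = channel          # hypothetical vendor-specific handle

    def read(self) -> float:
        raise NotImplementedError("wire this to the vendor SDK")

def poll(port: SensorPort) -> float:
    return port.read()                   # application logic never changes
```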

5. Security

Last but not least, you will want to assess gateways for security. Look for a technology that has security features built in. This security-by-design approach should apply to everything from the gateway’s firmware to secure communications with both IoT devices and the cloud. Because the gateway is the hub for communication, data aggregation and analytics, it’s important to ensure it is secure. As such, make sure to ask vendors about secure boot features, tamper detection, encryption, and whether their gateway uses a hardware random number generator and/or a crypto engine.

IoT can revolutionize your business through cost savings, increased productivity and even new lines of business. A well-thought-out IoT gateway allows you to tap into the collective power IoT can have in transforming your business, and creating a prototype is a natural first step in reaching this goal. Choose wisely, even if that means going through several proof-of-concept iterations, and you will be rewarded with a solid IoT product that is scalable, future-proof and happily used by your customers.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 3, 2017  12:49 PM

IoT’s surprising impact on revolutionizing inventory management

Sarah Hatfield
Data Analytics, Internet of Things, Inventory, Inventory Management, iot, IoT analytics, IoT applications, IoT hardware, Supply chain

You know disruptive technologies have reached the tipping point when non-IT pros build business plans around them. This is exactly what’s happening with IoT. Because of its ability to drive wide-ranging, game-changing improvements, IoT is starting to be used across all aspects of business operations. One of the newest, and most impactful, areas is spare parts inventory management, a key aspect of the post-sale supply chain.

Maintaining the right level of spare parts is critical. As you can probably guess, carrying excessive inventory can be prohibitively expensive. But if you have too little, you’ll slow product repairs, hurt customer experience and end up spending more money purchasing new parts for stock replenishment. The problem is, traditional best practices for managing spare parts — using time-series algorithms combined with sales forecasting, seasonality, gut instincts and simple rules of thumb to determine how many parts to stock — are woefully inaccurate because:

  • They’re static, “review-and-stock” endeavors based largely on historical demand data
  • The algorithms don’t account for variables resulting from failed parts in the field

Knowing this, many companies hedge their bets by purposefully overstocking. Others think they’re maintaining the right levels, but unknowingly overstock. In either case, they’re wasting a lot of money.

New IoT-driven inventory planning

The key to accurately stocking parts is knowing which ones are likely to fail and when they’ll need to be replaced. Some businesses have attempted to use IoT data to understand product failure impacts on inventory planning. However, most of the IoT monitoring programs are designed to respond to signal failures. Plus, IoT data collection is often haphazard and emphasizes the few pieces of equipment that are starting to fail, rather than the whole. This makes it impossible to generate a sound baseline for analyzing product performance and predicting failures — which, in turn, makes it impossible to accurately forecast spare parts needs.

The good news is there’s a new inventory planning algorithm that builds IoT-based failure data directly into the equation. Developed at the Massachusetts Institute of Technology’s Center for Transportation and Logistics, it enables businesses to accurately forecast needs. By using this methodology and analyzing historical failure data on the entire installed base, businesses can predict the exact spare parts they’re likely to need, when and in what quantity.
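The MIT methodology itself isn't spelled out in this post, but the general idea of folding observed field failure rates into a stocking calculation can be sketched with a simple Poisson model. The numbers and the fill-rate approach below are illustrative assumptions only, not the MIT CTL algorithm:

```python
# Simplified spares forecast: expected failures over the replenishment lead
# time from an observed field failure rate, plus a buffer to hit a target
# fill rate. Purely illustrative; not the MIT CTL algorithm.
from math import exp, factorial

def poisson_cdf(k, lam):
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

def spares_to_stock(installed_base, annual_failure_rate, lead_time_days,
                    target_fill_rate=0.95):
    # Expected failures during one replenishment lead time
    lam = installed_base * annual_failure_rate * lead_time_days / 365.0
    k = 0
    while poisson_cdf(k, lam) < target_fill_rate:
        k += 1
    return k

# Example: 5,000 units in the field, 2% annual failure rate, 30-day lead time
print(spares_to_stock(5000, 0.02, 30))   # spares needed for a 95% fill rate
```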

The better news is that it doesn’t take a huge team to capture IoT data because not much data is needed. As few as 1,000 signals can drive substantial stock reductions.

The best news is that using the IoT-driven parts inventory methodology can help reduce stock by 6-10%. Imagine what that could do to your bottom line.

Getting the better of your instincts

When I tell inventory management execs they can and should dramatically reduce spare parts inventory, they, understandably, have a visceral “no way” reaction. The instinct is to increase, not decrease, stock. Nobody wants to be caught with a shortage. The consequences of missed service-level agreements and angry customers are too fierce.

But when they see the data, run the numbers and learn that they will also be able to maintain — or even enhance — service levels and fill rates by building IoT parts failures into their planning, interest is piqued. Three benefits stand out:

  • Reduce inventory up to 10%
    In addition to tightening spare parts forecasting, businesses can be more proactive and prescriptive when taking actions to avoid and quickly fix machine failures. This, too, contributes to equipment uptime and reduces the need for spare parts.
  • Save substantial money
    Carrying costs (for example, physical space, parts handling, and deterioration and obsolescence) average about 25% of inventory value. Thus, even a 6% stock reduction can save hundreds of thousands to millions of dollars, depending on the value of the parts.
  • Maintain or improve service levels and fill rates
    By aligning service operations with detailed maps of parts demand patterns, businesses can increase responsiveness. Not only are they holding less stock, they’re better able to fulfill parts requests and deliver a timely, improved customer experience.

Advanced analytics drive even more savings

Knowing which parts in the field are likely to fail and, therefore, which spares you’ll need at any given moment, has implications beyond inventory management. You can use that same IoT data to drive a closed-loop system with asset recovery, focusing in a very surgical way on bringing the most in-demand parts back into inventory.

By applying operational segmentation, you can identify tiers of parts to prioritize for returns, and then create automated business rules that target those high-value/high-demand parts so they can be recovered and refurbished quickly. This reduces new buys for stock replenishment, saving even more time and money.

So why not check your gut instinct at the door, and use IoT to reduce spare parts stock, keep customers happy and slash costs? It’s safer and easier than you think.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 3, 2017  10:44 AM

Mix predictive maintenance with IoT and rule public transport

Ruben van der Zwan
Customer satisfaction, Customer service, Internet of Things, iot, IoT analytics, IoT data, Predictive Analytics, Predictive maintenance, Smart transit, transportation

When you take a train, you expect it to drop you off on time and without being squashed like a sardine in a can. This doesn’t sound like very much to ask, yet it is. The public transport sector struggles to have its trains, buses and ferries run on time and to prevent breakdowns and accidents, all while keeping travelers well informed and satisfied. Many of these problems could be avoided by better maintenance. Sadly, maintenance done right takes time and money, and maintenance done wrong takes more time, more money and sometimes even lives. If only there were a way to tailor your repairs and shorten downtime. Oh wait. There is. Welcome to the era of the internet of things.

Standard checklist

Public transport vehicles are taken off the road for maintenance regularly. This downtime is crucial, as deterioration such as worn wheels and bad brakes can cause delays or even fatal accidents. To avoid any risks, vehicles are inspected every couple of months, even if there’s nothing wrong. Mechanics check them using a standard checklist, clear them for the road and send them on their way. This predictive maintenance method has two major disadvantages. First, lots of time and money are spent on the maintenance of trains and buses that don’t need it. Second, a standard checklist is not always sufficient, as public transport vehicles are subject to many different circumstances. They differ in rides per day, occupancy level and the kind of service they provide. City buses, for example, will show different wear and tear compared to long-distance buses. Moreover, weather conditions have a large impact on the state of vehicles and can differ per region. Regularly scheduled maintenance may help public transport companies fix the larger part of the defects, but it won’t help them improve their services.


The internet way of fixing things

There’s nothing wrong with predictive maintenance; there’s something wrong with the way people make the predictions. Today, most decisions in public transport maintenance are made based on earlier experiences and data. But as I pointed out, there’s just too much variation in the way public transport vehicles are being used. If you really want to know about the state of your buses, trains or whatever it is you’re driving, you should look at your data in real time. Don’t predict the state of the engines, know the state of the engines. With this information, you can schedule tailor-made maintenance for the vehicles that need it and leave the rest alone. How do you do it? You simply do what everybody else does when there’s a problem: you include the internet. By attaching sensors to the different parts of your vehicles (think engines, brakes, batteries), you can request information about them wherever and whenever you want. APIs will gather the input and translate it into workable data that you can use to set the follow-up in motion. To do so, you determine thresholds per vehicle part, so that when a parameter goes out of the normal range, the vehicle is taken off the road for maintenance.
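A stripped-down version of that per-part threshold logic might look like the following. The parts, limits and output format are made-up placeholders for whatever a transport operator's maintenance systems actually track:

```python
# Toy threshold check: compare the latest sensor readings per vehicle part
# against per-part limits and raise a maintenance flag when one is crossed.
THRESHOLDS = {                          # illustrative limits per monitored part
    "brake_pad_mm":      {"min": 3.0},
    "engine_temp_c":     {"max": 110.0},
    "battery_voltage":   {"min": 11.8},
    "tire_pressure_bar": {"min": 7.5},
}

def maintenance_flags(vehicle_id, readings):
    """Return a list of parts on `vehicle_id` that need attention."""
    flags = []
    for part, value in readings.items():
        limits = THRESHOLDS.get(part, {})
        if "min" in limits and value < limits["min"]:
            flags.append((vehicle_id, part, value, "below minimum"))
        if "max" in limits and value > limits["max"]:
            flags.append((vehicle_id, part, value, "above maximum"))
    return flags

print(maintenance_flags("bus-042", {
    "brake_pad_mm": 2.4, "engine_temp_c": 96.0,
    "battery_voltage": 12.4, "tire_pressure_bar": 7.9,
}))
# -> [('bus-042', 'brake_pad_mm', 2.4, 'below minimum')]
```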

The long list of benefits

There’s more to this new way of predictive maintenance than you might think. First of all, tailoring your services based on real-time data will help you save costs. When mechanics know what needs to be done beforehand, they can get rid of the standard checklist and move on to the actual defects. This will shorten the downtime of the bus or train and get it back on the road faster. Second, it will improve the reliability of the vehicle thanks to better targeted maintenance and real-time insights. From now on, you will know about the low tire pressure before the tire deflates, which results in fewer unplanned stops and less unpredicted downtime. Another advantage is that you don’t need as many mechanics and parts as you used to, because you only need them when there’s an actual defect or crossed threshold. Lastly, you can optimize the way you communicate with travelers. When you use the internet to monitor the state of your vehicles and to schedule maintenance, you can also use it to inform people about available buses and trains and where to find them.

IT makes it happen

What does internet-based predictive maintenance look like in real life? I personally like the story of Trenitalia. It uses the internet of things, sensors and data analysis to keep its trains in top shape, and it was able to shorten downtime, cut costs and make better predictions. It is one of many success stories, and it proves that we’re close to completely changing the public transport sector. And the sector needs it, as there’s no other in which more people go from one place to another. Travelers want to be transported in the safest and fastest way possible, while having as much information on their journey as possible. I truly believe that IT can make that happen. Switching to such a digital strategy requires a lot of thought, though. If you want to disrupt your sector by staying one (or two, maybe three) steps ahead of everyone else, you need technology, knowledge and, most importantly, a cultural switch within your organization. Take all that, mix it with IoT and a great (data) infrastructure, and you will rule public transport.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

