IoT Agenda


August 22, 2019  10:32 AM

How IoT and AI are fueling the autonomous enterprise of the future

Mike Leibovitz Profile: Mike Leibovitz
AI and IoT, Automation, Internet of Things, iot, IoT in retail, IoT verticals

IoT technology has reached critical mass. Today, there are more than 10 billion IoT devices in use around the world. Approximately 127 new devices are connected every second, according to PYMNTS’ monthly Intelligence of Things Tracker report. That’s more than 2,000 new connections since you started reading this article.

While much of the IoT conversation focuses on the devices themselves, the true potential of IoT extends well beyond hardware. Instead, it’s in the data a device generates, the action it instigates and the ultimate value it delivers. For example, sensors deployed in a grocery store are important because they send real-time data about stock levels to store employees so they can manage inventory accordingly.

As the volume and sophistication of connected technology increases, IT leaders must ensure devices, architecture, automation and human intelligence are working in harmony to create superior end-user experiences. This is a framework known as the autonomous enterprise. Let’s explore how to combine IoT technology with AI and automation to create the autonomous stores, classrooms and smart cities of the future.

Modernizing retail stores

From buy-online-pickup-in-store options to mobile point-of-sale systems, new IoT technology makes the check-out process and in-store experience more seamless and convenient for customers.

Increasingly, retailers also experiment with video analytics and sensors to automate inventory management and track product sell-through rates. For instance, grocers use IoT devices, such as temperature sensors, to preserve cold and frozen goods, ensure food safety and minimize spoilage.

Store managers can also deploy shelf sensors to analyze which products a shopper removes from a shelf, and then replaces without purchasing. This data is valuable for store owners and brands because it provides insight about consumer shopping behaviors and helps answer questions about why a product has a strong or poor sell-through rate. Did consumers engage with your product, such as by picking it up, or were they more attracted to a different, neighboring product in the store?

Furthermore, automated real-time notifications about inventory levels can ensure shelves are adequately stocked for shoppers, and it can save employees time on manual inventory monitoring tasks so that they can be more readily available for customers.
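
To make the idea concrete, the restocking logic can be as simple as comparing each shelf sensor's reported stock level against a reorder threshold. The Python sketch below is purely illustrative; the reading format, thresholds and notification channel are assumptions rather than any retailer's actual system.

    # Hypothetical sketch: turn raw shelf-sensor readings into restock alerts.
    # The reading format, thresholds and notify() channel are assumptions, not a vendor API.
    REORDER_THRESHOLDS = {"milk-2l": 12, "eggs-dozen": 20}  # minimum units on shelf

    def notify(store_id, sku, level):
        print(f"[store {store_id}] restock {sku}: only {level} units left")

    def process_reading(reading):
        """reading example: {'store_id': 17, 'sku': 'milk-2l', 'units_on_shelf': 9}"""
        threshold = REORDER_THRESHOLDS.get(reading["sku"])
        if threshold is not None and reading["units_on_shelf"] < threshold:
            notify(reading["store_id"], reading["sku"], reading["units_on_shelf"])

    process_reading({"store_id": 17, "sku": "milk-2l", "units_on_shelf": 9})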

The potential for IoT also extends to the backend of retail operations, and helps optimize companies’ supply chains. Kroger is among the leading innovators in the grocery retail category, deploying IoT and automation technology such as robotic carts and robust warehouse management systems in its distribution and fulfillment centers. These technologies help employees expedite picking and packing processes. The combination of technology and human intelligence increases the efficiency of retail operations and helps deliver exceptional customer experiences.

Powering digital classrooms

Previously, I highlighted how classroom environments are becoming increasingly digital. Smartboards, robotics and video live-streaming technology are just a few examples of the kinds of IoT technology used in schools today. According to a Deloitte survey, 80% of teachers use digital education tools at least once a week.

However, the influx of devices presents new challenges for IT teams and school networks. A robotics lab supporting a STEM lesson plan or a video live-stream broadcasting to remote students are both bandwidth-intensive. When Wi-Fi traffic is not properly managed during these activities, students and teachers will likely face connectivity challenges, and it will cause significant network latency for the rest of the campus. In turn, network downtime in schools impedes educators’ curriculum, and disrupts digital-dependent learning such as online testing or smartboard lectures.

School administrators can supplement their Wi-Fi network with AI functionality to improve radio frequency efficiency, and expand wireless capacity to meet new bandwidth demands across the campus automatically and on-demand. Specifically, AI can seamlessly optimize the wireless network in changing environments such as the cafeteria or gymnasium where RF characteristics can vary significantly depending on the number of people and devices present. With strategic automation, students, faculty and staff can rely on consistent connectivity.

In addition to cultivating immersive learning environments for students, the combination of automation and IoT devices can play a critical role in supporting day-to-day operations and reinforcing school safety. For example, Forsyth County School District recently announced plans to deploy 600 cameras with advanced analytics and facial recognition capabilities to enhance attendance tracking and increase campus security.

Automating the administrative task of manually taking attendance enables teachers and students to maximize time spent on learning. Meanwhile, automated facial recognition and video analytics can help to identify potential intruders and unauthorized visitors that may threaten school safety.

Fortifying smart cities

By 2050, 70% of the world’s population will live in urban areas, according to the Organization for Economic Cooperation and Development. As urban density increases, city planners and CIOs are looking to smart, connected technology such as digital signage, traffic cameras, stoplight timers and roadway sensors to help alleviate congestion, prevent traffic accidents, and improve overall living conditions for citizens.

This IoT technology is already driving meaningful results. In 2018, McKinsey Global Institute found that various smart city applications could reduce fatalities by 8% to 10%, reduce the average commute time by 15% to 20% and cut greenhouse gas emissions by 10% to 15%.

While smart city innovation helps the urban population in many ways, it also introduces new challenges for city CIOs, namely managing hundreds of thousands of disparate systems and IoT devices while preventing bad actors from infiltrating critical functions such as traffic control and water and energy management.

To enhance security and to ensure citizens don’t lose access to these critical public services, municipalities can leverage real-time network analytics so they can have a clear understanding of device usage patterns. In turn, they can use AI and machine learning technology to monitor network traffic by application and device type, detect anomalies and quickly mitigate potential security incidents.
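
A heavily simplified illustration of monitoring traffic by device and flagging anomalies is a per-device baseline with a deviation cutoff. Real municipal deployments would use far richer features and models; the device names and traffic figures below are invented for the sketch.

    # Minimal sketch: flag devices whose traffic deviates sharply from their own baseline.
    # Thresholds and sample data are illustrative only.
    from statistics import mean, stdev

    history = {  # bytes per 5-minute window, per device (hypothetical)
        "traffic-cam-041": [52_000, 48_500, 51_200, 49_800, 50_500],
        "water-sensor-007": [1_200, 1_150, 1_300, 1_250, 1_180],
    }

    def is_anomalous(device_id, new_bytes, z_cutoff=4.0):
        samples = history[device_id]
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            return new_bytes != mu
        return abs(new_bytes - mu) / sigma > z_cutoff

    print(is_anomalous("water-sensor-007", 950_000))  # True: sudden traffic spike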

Fueling the autonomous enterprise

As IoT technology becomes more entrenched in our everyday lives, industry-leading organizations understand that the devices are not the end game. Rather, when IoT technology, architecture, automation and human intelligence work together in harmony, IT leaders can drive operational efficiency, reduce time spent on mundane, administrative tasks and fortify network security to deliver enhanced end-user experiences. This is the autonomous enterprise vision that we’ll continue to see come to life in stores, schools and cities.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

August 22, 2019  9:44 AM

The benefits and pitfalls of engaging in IoT research

Mitch Maimam Profile: Mitch Maimam
Internet of Things, iot, IoT deployment, IoT design, IoT development, research and development

While there are already many connected products in the marketplace, there remain many opportunities for developing new IoT products and technologies. The greatest challenge for companies looking to develop an IoT product is how to apply an R&D process that will result in viable commercial and consumer-facing IoT products.

Today, most implementations of full stack IoT solutions involve the integration of existing technologies that are well-known. Some of these technologies have been heavily commercialized for years, and are at a cost point attractive enough for commercial and consumer applications. Other technologies are developed and available, but have not dropped down the cost/performance curve to the point where they are suitable for consumer-facing IoT solutions. With these gaps in the available technology, research into these new domains of IoT is warranted and has the potential to uncover new viable IoT solutions.

First, I will give an in-depth look into R&D.

Research

By definition, research projects are much more fundamental. The outcomes might prove a product idea to be either achievable or unachievable. Cost risks are higher as the plan for realization can be unpredictable or without a known goal. Schedules for project completion are also usually fuzzy. Hence, research projects are high risk.

Development

These are projects where achievement of the objective is possible using known technologies or simple technology extensions. This is not to minimize the cost of execution, which can be substantial. However, the risks of satisfying the project goals, costs and schedules are manageable.

Though creating a high-quality, reliable IoT solution is a significant investment and takes time to realize, these are typically development projects. When significantly different technology is required to realize the product, the project is generally in the research category.

IoT research pitfalls

IoT research projects have many risks, and our experience has shown that they can fail for a variety of reasons. I have outlined a few below:

  • The physics of achieving the technological goal might make it impossible.
  • Achieving the technology goal might be possible, but at an ingredient cost that precludes a commercially acceptable product price point.
  • The time to achieve an outcome can be highly unpredictable. This might mean it not only takes longer to develop a technology, but also that the clock might not be in favor of the team investing in it. A competing technology from another source might arise. If the time window slips, the business opportunity might vanish or be significantly reduced.
  • The technology might be available, but the industrial capacity is being fully absorbed by a few major companies. This means the technology is available, but might not be available to anyone other than the largest players willing to commit to huge volumes of component parts. As an example, do you want the newest and latest display used by Apple in the newest — or soon to be released — iPhone? Great. Realize Apple might have absorbed the capacity of the supply chain for those displays. Expect shortages of supply, which translates into both cost and unpredictable delivery schedules. It happens.

When considering investment in IoT research, it's important to narrow the focus of the research area. It is certainly possible to do fundamental research at the academic level, but the more relevant and important research defines products that, if ultimately successful, will fulfill a current or anticipated marketplace need.

IoT research elements for success

For a successful IoT-oriented research program, organizations must ensure that certain elements are in place. I have outlined a few below:

  • Inclusion of discovery around technology that lends itself to commercial-scale manufacture at an attractive price point.
  • Organization of research projects with frequent and clearly defined checkpoints where the goals are reviewed, progress is measured, and the go-forward cost and schedule are assessed along with spending to date.
  • Frequent reviews of the known and anticipated competitive landscape, to avoid "a day late and a dollar short" syndrome.
  • Willingness to accept risk and to be clear sighted and open to the possibility of failure. While teams need to be tenacious and motivated for success, one has to understand that — at some point — it might be necessary to admit defeat.
  • The value proposition for success must be sufficient to warrant the significant cost, unpredictable schedule and risk of outright failure. If this cannot be clearly assessed, it might not make sense to make the investment.

Nowhere more than in the domain of research does an IoT-oriented company need to keep its eyes wide open. It must not only accept risk, it also has to support and celebrate failures for the knowledge they provide to inform future efforts. However, those successful in managing research are able to overtake the competition and gain a sustainable edge by being first to market.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 21, 2019  3:48 PM

Mapping the device flow genome

Greg Murphy Profile: Greg Murphy
Internet of Things, iot, IoT data, IoT data management, IoT device management, IoT devices, iot security

The explosion of connected devices has given rise to today’s hyper-connected enterprise, in which everyone and everything that is fundamental to the operation of an organization is connected to a network. The number of connected devices runs into the billions and is growing exponentially in both quantity and heterogeneity. This includes everything from simple IoT devices, such as IP cameras and facilities access scanners, to multi-million-dollar functional systems like CT scanners and manufacturing control systems. With the sudden surge of disparate and complex devices all tapping into various enterprise networks, it is little surprise that hyperconnectivity is becoming an incredibly complex and increasingly untenable problem for IT and security groups to address. This is especially true for device-dependent Global 2000 organizations, major healthcare systems, retail and hospitality operations or large industrial enterprises.

A complex problem like hyperconnectivity cannot be solved without first establishing a baseline of understanding. For example, in the medical community, development of targeted therapy for many serious diseases was comparatively ineffective before the mapping and sequencing of more than three billion nucleotides in the human genome. The Human Genome Project, a 15-year collaborative effort to establish this map of human DNA, has enabled the advancement of molecular medicine at a scale that was once impossible.

Similarly, IT, security and business leaders cannot address the myriad challenges of the hyper-connected enterprise without fully mapping the device flow genome of each network-connected device and system. Much like DNA mapping, mapping the device flow genome is a significant challenge, but well-worth the effort for the intelligence it provides.

The challenge of mapping a system is enormous, because it requires complete understanding of both the fixed characteristics of each device, as well as the constantly changing context in which it operates. To do this at scale, network operators must be able to apply sophisticated machine learning to accurately classify each device and baseline its dynamic behavior along with the context of the network.
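
At a very high level, classifying each device and baselining its dynamic behavior can be pictured as building a behavioral profile per device from observed flows and comparing new activity against it. The sketch below is a toy illustration with made-up fields, not the method of any particular product.

    # Toy sketch of behavior baselining: remember which (destination, port) pairs a
    # device normally talks to, then flag flows that fall outside that profile.
    from collections import defaultdict

    baseline = defaultdict(set)  # device_id -> set of (dst, dst_port) seen during training

    def learn(flow):
        baseline[flow["device_id"]].add((flow["dst"], flow["dst_port"]))

    def deviates(flow):
        return (flow["dst"], flow["dst_port"]) not in baseline[flow["device_id"]]

    learn({"device_id": "infusion-pump-12", "dst": "10.0.4.2", "dst_port": 443})
    print(deviates({"device_id": "infusion-pump-12", "dst": "203.0.113.9", "dst_port": 8080}))  # True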

If operators can do that, they can immediately identify potential mutations in the genome — devices that are not behaving the way they should — and mount an appropriate response to ensure business continuity and prevent catastrophic downstream consequences. At the same time, they can leverage artificial intelligence to define and implement actionable policies that prevent future recurrences. That is the only reliable way to protect critical assets and deliver true closed-loop security in the hyper-connected enterprise.

Mapping vs. fingerprinting

Traditionally, solutions seeking to identify and potentially classify devices on a network use static device fingerprinting, which can discover a device's IP address, use MAC address lookup to identify the device manufacturer, and apply other rudimentary techniques to build a generic profile of the device. Fingerprinting answers some important but very basic questions: How many devices are connected to the network? To which ports and VLANs are they connected? How many of these devices are from Manufacturer X?
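
As a rough sketch of one of those rudimentary techniques, the first three bytes of a MAC address (the OUI) map to a manufacturer. The tiny lookup table below is illustrative only; real fingerprinting tools load the full IEEE OUI registry.

    # Illustrative fingerprinting step: derive the manufacturer from the MAC's OUI prefix.
    # The two OUI entries below are examples only; production tools use the IEEE registry.
    OUI_TABLE = {
        "F0:9F:C2": "Ubiquiti Networks",
        "B8:27:EB": "Raspberry Pi Foundation",
    }

    def manufacturer(mac: str) -> str:
        prefix = mac.upper()[:8]  # first three octets, e.g. "B8:27:EB"
        return OUI_TABLE.get(prefix, "unknown vendor")

    print(manufacturer("b8:27:eb:12:34:56"))  # Raspberry Pi Foundation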

Gathering more specific information has typically required agents to be installed on each endpoint. In the hyper-connected enterprise, that is simply not possible because the scale and heterogeneity of these devices quickly break traditional IT and security models. Instead, by fully mapping the device flow genome automatically — without any modifications to the device or the existing enterprise infrastructure — an operator can identify details that lead to actionable insight.

As an example, a fingerprinting solution might — at its optimum — enable a hospital to identify the number of heart rate monitors connected to its network. Mapping the device flow genome would not only identify those heart rate monitors, but also provide the information that six of them are subject to an FDA recall, two of them are running an outdated OS that makes them incredibly vulnerable to ransomware, and three of them are communicating with an external server in the Philippines. All of which are major red flags.

This level of granularity is necessary and attainable for every device: IP cameras, HVAC control systems, access badge scanners, self-service kiosks, digital signage, infusion pumps, CT scanners, manufacturing control systems, barcode scanners and more. That includes even the devices that find their way into an environment without operator knowledge, such as an Amazon Echo or Apple iPad. The quantity and variety of these devices is almost unimaginable in the enterprise today, and it is going to grow by orders of magnitude in the near future.

Identify and take control

Once the valuable data has been garnered from mapping the device flow genome, operators will have a sophisticated level of detail on what's connected to their networks, what each device is doing and what it should be doing. That information, analyzed and applied appropriately, should enable hyper-connected enterprises to take control of their vast array of devices to ensure effective protection today and over time.

AI-based systems will enable enterprises to deploy powerful policy automation to regulate the behavior of every class of device so none are able to communicate in any manner — either inside or outside of the network — that exposes them to risk and vulnerability. From there, enterprises can fully secure each class of device by implementing micro-segmentation and threat remediation policies with sophisticated and actionable AI.
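
As a rough picture of what such policy automation regulates, a class-level policy can be thought of as an allow-list checked against every flow. The sketch below is a conceptual simplification with invented field names and values, not a description of any particular product's policy engine; a real micro-segmentation system would push equivalent rules into switches, firewalls or the wireless fabric.

    # Simplified sketch: a per-device-class communication policy enforced as an allow-list.
    import ipaddress

    POLICIES = {
        "ip-camera": {"allowed_dst_subnets": ["10.20.0.0/16"], "allowed_ports": {554, 443}},
    }

    def flow_allowed(device_class, dst_ip, dst_port):
        policy = POLICIES.get(device_class)
        if policy is None:
            return False  # default-deny for unclassified device classes
        in_subnet = any(ipaddress.ip_address(dst_ip) in ipaddress.ip_network(net)
                        for net in policy["allowed_dst_subnets"])
        return in_subnet and dst_port in policy["allowed_ports"]

    print(flow_allowed("ip-camera", "10.20.3.7", 554))    # True
    print(flow_allowed("ip-camera", "198.51.100.4", 80))  # False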

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 21, 2019  3:24 PM

IoT brings the physical and digital worlds together

Dipesh Patel Profile: Dipesh Patel
Enterprise IoT, global IoT, IoT analytics, IoT data, IoT data management, IoT in retail, iot security, smart buildings

It’s becoming increasingly clear to me that IoT is the start of something entirely new, rather than an end state in itself. The real prize is in how IoT — and a global host of connected devices — will add new context to data already being gathered through existing digital, analogue and manual means. However, taking advantage of this opportunity has not been easy. According to McKinsey, as much as 70% of IoT projects remain stuck in the proof of concept phase, rather than moving to deployment. IoT adopters need to be able to show real business value as well as how IoT solves a particular problem, and it all comes down to obtaining a complete picture of the data.

To make this happen, we need to bring the physical and digital realms into close harmony. We also need to ensure that there is clear transparency and consent when obtaining customer data. As we analyze data, we must ask ourselves whether the data comes from IoT devices, or from digital engagement. Privacy and security must be treated as first class citizens and not as an afterthought for IoT to thrive. All of this is a complex technology task, but one that is surmountable.

Transforming business outcomes through IoT data

A real-world example of the data-driven opportunity is in the retail space, where the combination of IoT-enabled physical stores and shoppers' online buying preferences is opening new possibilities. These IoT-enabled stores are becoming more prevalent as retailers look to drive omnichannel personalized experiences, seamless checkout and tailored offerings for their shoppers. Value comes from the ability to combine previously siloed, in-store IoT real-time data with a shopper's digital engagement, such as the store's mobile apps and loyalty programs, to provide a more holistic experience. That experience includes delivering coupons tailored to shoppers' buying histories, providing personalized recommendations from an in-store associate and optimizing inventory and product availability.

At the end of the day, it is about interacting with the shopper through their preferred channels, giving them unique experiences and providing tailored offers that drive loyalty, which ultimately leads to repeat business. According to an Arm Treasure Data study, nearly 50% of shoppers would consent to companies using their data if it meant getting the right rewards and incentives.

This is just one example of a real-world scenario in retail. Another example is a building owner bringing together data from HVAC, security, lighting and IoT devices to obtain a unified view of building operations to drive cost savings and enhance customer experiences. The value is replicable across industries as data silos are removed and separate data sets are brought together.

Making data security and privacy a priority

Bringing the physical and digital worlds together paints a far richer data picture. It also means there needs to be an added emphasis on security and privacy whenever data is involved, whether that is a retailer delivering more personalized customer experiences or a property manager using IoT technologies to better understand use of their commercial building space.

Security is vital as adversaries continue to get more advanced in their attack methods, and the cost of cybercrime for organizations continues to grow. Data security starts from the ground up, with IoT devices built and tested on secure frameworks. Organizations should look for IoT data management solutions that support secure management of the physical IoT device and its data throughout the lifecycle, and securely unify a broad set of enterprise digital data with IoT data.

Privacy concerns on how data is being collected, used and stored make transparency and consent critical. They must be addressed. For example, in the retail scenario described above, consent can be obtained via an opt-in through a store’s mobile app or loyalty program. The data management solution should provide tools and features to enact and manage leading privacy capabilities within applications that collect, store and utilize data.

Unlocking new possibilities with IoT data

The combination of physical IoT and digital information presents a wealth of opportunity for organizations across industries to transform their businesses. Organizations should look at IoT solutions that enable them to securely unify, store and analyze all of this data to deliver actionable insights. Ultimately, the true value of IoT will be achieved if data can be harnessed to solve real business challenges at scale, while also keeping security and privacy at the forefront.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 21, 2019  2:51 PM

IoT edge devices need benchmarking standards

Jason Shepherd Profile: Jason Shepherd
Edge computing, Internet of Things, iot, IoT devices, IoT edge, IoT edge computing, IoT standards

This is the third part of a four-part series. Start with the first post here.

In the first two installments of this four-part series I discussed what edge computing is and why it matters. I also outlined some key architectural and design considerations based on the increasing complexity of hardware and software as you approach the device edge from the cloud. In this installment, I’ll dig even deeper.

Infrastructure size matters

Cameras are one of the best sensors around, and computer vision — applying AI to image-based streaming data — is the first killer app for edge computing. Only someone who wants to sell you wide area connectivity thinks it’s a good idea to blindly send high-resolution video over the Internet. A smarter practice is to store video in place at your various edges and review or backhaul only when meaningful events occur.

For any given use of edge devices, a key value that a provider can give customers is pre-validation and performance benchmarks on workloads. This ensures customers purchase the right-sized infrastructure up front and get reliable performance in the field. Surveillance for safety and security is fairly constrained in terms of variables, such as the number of cameras, resolution, frame rate and footage retention time. The marketplace for camera makers and video management providers is well-established. These constrained variables make benchmarking appropriate infrastructure sizes relatively straightforward.
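
Because those variables are so constrained, a first-order sizing estimate is simple arithmetic: aggregate bandwidth is cameras times per-camera bitrate, and storage is that stream held for the retention window. The numbers in the sketch below are placeholder assumptions, not benchmark results.

    # Back-of-the-envelope sizing for a surveillance workload (illustrative numbers only).
    cameras = 32
    bitrate_mbps = 4        # assumed per-camera stream, e.g. 1080p at a modest frame rate
    retention_days = 30

    aggregate_mbps = cameras * bitrate_mbps
    storage_tb = aggregate_mbps / 8 * 86_400 * retention_days / 1_000_000  # MB/s -> TB (decimal)

    print(f"{aggregate_mbps} Mbps sustained, ~{storage_tb:.1f} TB for {retention_days} days")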

As new tools and the rise of edge computing enable scale for the use of computer vision, applications become less constrained than surveillance, and data regarding behavior and performance needs is harder to come by. For example, in a brick and mortar retail scenario, the compute power needed to identify basic customer demographics — such as gender or age range — with an AI model is different than what’s needed to assess individual identity. Retailers often don’t have power or cooling available in their equipment closets, so they must get creative. It would be valuable for them to know in advance what the loading requirements will be.

Users’ needs are likely to grow over time with the consolidation of more workloads on the same infrastructure. It’s important to deploy additional compute capacity in the field and invest in the right modular, software-defined approach up front, so you can readily redistribute workloads anywhere as your needs evolve.

Fragmented edge technology makes benchmarking trickier

In more traditional telemetry and event-based IoT, measuring efficacy and developing benchmarks is especially tough due to the inherent fragmentation near the device edge. Basically every deployment tends to be a special case. With so many different uses and tool sets, there are no established benchmark baselines.

I often draw edge-to-cloud architecture outlines left to right on a page because it fits better on slides with a landscape layout. A few years back, during a presentation on many of these concepts, someone pointed out to me that the cloud on the right is like the East, with the longest legacy of refinement and stability, whereas the edge on the left is the Wild West. This pretty much nails it, and this is why it's so important for us to collaborate on open tools like EdgeX Foundry that facilitate interoperability and order in an inherently fragmented edge solution stack. It takes a village to deploy an IoT solution that spans technology and domain expertise.

In addition to facilitating open interoperability, tools like the EdgeX Foundry framework provide bare-minimum plumbing to not only serve as a center of gravity for assembling predictable solutions, but also facilitate stronger performance benchmarks regardless of use.

Tools should fit a standard for IoT edge interoperability, so IT pros can focus on value instead of reinvention. An IoT standard would also create benchmarking for infrastructure sizes, so customers can better anticipate their needs over time.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 19, 2019  5:14 PM

Protect IoT bare dies and wire bonds for high reliability

Zulki Khan Profile: Zulki Khan
Internet of Things, iot, IoT devices, IoT hardware, IoT PCB, PCB design, Printed circuit boards

As IoT devices gain greater acceptance within mission critical industries such as industrial, medical and aerospace, product integrity and reliability are of the highest order. Bare chip or die protection is at the top of the list for ensuring IoT reliability and product integrity.

IoT devices, in most instances, require a combination of conventional surface-mount printed circuit board (PCB) manufacturing and microelectronics manufacturing, which creates a hybrid manufacturing process. IoT PCB microelectronics manufacturing most often requires dies to be placed on a PCB, which can be a rigid, flex or rigid-flex circuit. In some cases, dies can also be placed on a substrate.

Why protecting die and wire bonding is important

Protecting a bare die and its associated wire bonds is critical to assure mechanical sturdiness and keep out moisture, thus maintaining a high degree of reliability for the IoT user. PCB microelectronics assembly requires a very delicate, fine wire that is generally made of gold. Typical wire gauges are one, two, three and five mils. Five-mil wire is typically used for very high-current applications. More often than not, one-mil wire is used, but in some cases sub-mil wire — 7/10 of a mil — is also used.

It’s highly advantageous for IoT device OEMs to get a good handle on how best to protect bare dies and their associated wire bonding. That way, OEMs can assure themselves of high levels of product reliability.

Methods to protect die and wire bonding

There are two distinct sealing compound methodologies for protecting the die and wire bonding. One is called by the unusual name of glob top, which actually fits very well since a glob of epoxy is placed on top of the die to protect it.

Dam and fill is a similar die sealer, which is in this same glob top category. It involves creating a dam or wall around the die and associated wire bonding by using a high viscosity material. Then, the middle or cavity surrounded by the dam is filled with a low viscosity epoxy. Thus, the high and low viscosity materials act as an effective protector of the die and wire bonding.

Lid and cover is the second encapsulation method. The lid can be ceramic, plastic or glass, depending on customer specifications and the application. Such a lid can be soldered onto the substrate if the surface finish is aluminum, nickel, gold or hot air solder leveling.

In some cases, a specialized lid with B-staged epoxy is provided. Most likely, it is custom made with epoxy already applied on the lid or cover. All that is needed in this case is to cure it and then apply it around the die and wire bonds. While the lid and cover protection method isn’t as widespread as glob top, the lid and cover approach is used depending on specialized PCB applications.

Reliability not only depends on the right bare die sealing compound for the right IoT PCB application, but also the level of microelectronics manufacturing experience. PCB microelectronics manufacturing personnel must have a good understanding of these protection methodologies and how to accurately apply them to form a perfect microelectronics assembly.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 19, 2019  4:12 PM

Lightweight Machine-to-Machine technology emerges in IoT networks

William Yan Profile: William Yan
Cat-M, Internet of Things, iot, IOT Network, IoT protocols, IoT wireless, M2M, Narrowband IoT

Last year’s report published by Gartner Research cited that connected things in use would hit 14.2 billion in 2019, with exponential growth in the years thereafter. IoT is garnering lots of attention, and a lot of organizations are considering and designing IoT services and technologies. One of the key IoT-focused emerging technologies is the Lightweight Machine-to-Machine (LwM2M) protocol, a device communication and management protocol specifically designed for IoT services.

What is LwM2M?

The standard is published and maintained by the Open Mobile Alliance (OMA). It was first released in February 2017 and initially designed for constrained devices with a radio uplink. As it stands now, LwM2M is a rather mature protocol that has been in development for more than five years. In that time, it has gone through four versions of the specification and has been tested in eight test fests organized by OMA. Compared with other IoT device management specifications, one can say that LwM2M is starting to gain wide market recognition.

Lightweight M2M components

Figure: The standard Lightweight M2M components and technology stack. Source: AVSystem.

Lightweight M2M is often compared to Message Queuing Telemetry Transport (MQTT), arguably the most popular device communication protocol in IoT services. MQTT is maintained by the International Organization for Standardization (ISO) and is a publish-subscribe messaging protocol. As such, it requires a message broker for data communication.

LwM2M, by contrast, comes with a well-defined data model representing specific service profiles, such as connectivity monitoring, temperature reading and firmware updates. Thanks to its well-defined data model structure, the standard enables common, generic, vendor-neutral and implementation-agnostic features, such as secure device bootstrapping, client registration, object and resource access, and device reporting. These mechanisms greatly reduce technology fragmentation and decrease potential interoperability errors.
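
To illustrate the shape of that data model: everything in LwM2M is addressed as an /ObjectID/InstanceID/ResourceID path, with Object 3 defined as the standard Device object. The resource names below are a small subset recalled from the OMA registry and should be verified against the current specification before being relied on.

    # Illustrative subset of the LwM2M addressing scheme: /ObjectID/InstanceID/ResourceID.
    # Object 3 is the standard Device object; the resource IDs below are an assumption to
    # check against the OMA LwM2M registry, not an authoritative listing.
    DEVICE_OBJECT = 3
    DEVICE_RESOURCES = {0: "Manufacturer", 1: "Model Number", 2: "Serial Number",
                        3: "Firmware Version", 9: "Battery Level"}

    def path(object_id, instance_id, resource_id):
        return f"/{object_id}/{instance_id}/{resource_id}"

    # A server reading the battery level of device instance 0 would address:
    print(path(DEVICE_OBJECT, 0, 9), "->", DEVICE_RESOURCES[9])  # /3/0/9 -> Battery Level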

What are the major advantages of LwM2M?

LwM2M is gaining recognition and starting to be adopted for facilitating IoT deployments due to its specific benefits. These include the following:

  • Ultra-low link utilization: LwM2M is a lightweight protocol that guarantees low data usage.
  • The ability to work over links with small data frames and high latency, as applies to most IoT use cases.
  • Greater power efficiency through Datagram Transport Layer Security (DTLS) session resumption and Queue Mode, which reduce energy usage and make the protocol suitable for devices in power saving mode and extended Discontinuous Reception modes.
  • Support for both IP and non-IP data delivery transports, which minimizes energy consumption.
  • Optimized performance in cellular-based IoT networks such as Narrowband-IoT and Long Term Evolution Cat-M.
  • Support for low-power wide area network binding.

LwM2M also meets the needs of enterprises that have to balance multiple factors — such as battery life, data rate, bandwidth, latency and costs — impacting their IoT services.

Who can benefit from the LwM2M protocol?

Lightweight M2M is becoming important for enterprises and service providers alike because of its successful use in IP and non-IP transports. It provides device management and service enablement capabilities for managing the entire lifecycle of an IoT device. The protocol also introduces more efficient data formats, optimized message exchanges and support for application layer security based on Internet Engineering Task Force (IETF) Object Security for Constrained RESTful Environments (OSCORE).

What does the future hold?

As a technology, Lightweight M2M is continually evolving. There’s an active OMA group that is constantly working on advancing the technology. The next specification release expected is a 1.2 version, which will provide support for many new things in a number of areas, such as supporting MQTT and HTTP; using IETF specification for end-to-end secured firmware updates; introduction of a dedicated gateway enabling group communication; and optimization efforts, such as registration templates and DTLS/TLS 1.3 support.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 14, 2019  2:34 PM

New-age intelligence systems for oil and gas operations

Abhishek Tandon Profile: Abhishek Tandon
Internet of Things, iot, IoT analytics, IoT data, IoT sensors, Machine learning

The oil and gas industry has been going through a tumultuous time of late. With volatile crude oil prices and geopolitical trends putting pressure on supply, it is becoming imperative for oil and gas companies to manage costs through operational effectiveness and minimize any production hurdles due to unplanned downtimes and unforeseen breakdowns.

Before making production decisions, organizations must understand the complex beast that is upstream operations with data points to analyze, including seismic and geological data to understand the ground conditions; oil quality data to determine gas oil ratio, water cut and submergibility; and pump calibration to ensure that it is optimized for the given conditions. Too much pressure on the pump and it is likely to break, too little pressure and it is being underutilized.

Technology is likely to be a top disruptor in the future of oil and gas operations for this very reason. IoT sensor data analytics and machine learning will enhance the machine interface and improve the effectiveness of brownfield setups. But what really constitutes a true intelligence system capable of disrupting this highly complex industry?

The new avatar of data analysis

There has never been a dearth of data usage in oil and gas operations. Even before data science became cool, there was a tremendous amount of statistical research that was being utilized to understand seismic and geological data and manage oil field operations efficiently. Data has always been the backbone of decision making in the oil and gas sector.

With the advent of data technologies that can handle scaling and machine learning to help operations teams and scientists make sense of the data, new-age intelligence systems are also starting to become top priorities in the long list of digital transformation initiatives.

Extracting the unknown unknowns

There are a number of prebuilt models that are used to determine the oil quality and calibrate well equipment. By feeding information into these models, field engineers have a good idea of the way the well is operating.

Machine learning starts to surface the unknown unknowns. It makes the existing setup more sophisticated by analyzing multivariate patterns and anomalies that can be attributed to past failures. Moreover, the analysis patterns are derived from several years of data to reduce any inherent bias. Machine learning alone cannot answer all analysis questions; it is one piece of the puzzle and enhances existing knowledge acquired through years of research.

Constituents of a new-age intelligence system

The speed at which organizations receive data and conduct analysis is of the utmost importance. Hence, a sophisticated decision system needs to deliver insights quickly and with tremendous accuracy. A disruption in an oil well can cause a revenue loss as high as $1 million per day.

A true decision support system should have IoT infrastructure, real-time monitoring systems, supervised learning models and unsupervised learning models. IoT infrastructure includes low power sensors, gateways and communication setups to ensure that all aspects of well operations are connected and providing information in near real time. Real-time monitoring systems allow constant monitoring of the assets driven by key performance indicators and look for any issues or spikes that can be caught by the naked eye. Typical scenarios that real-time monitoring systems would cover include existing oil production, temperature and pressure of the well pumps and seismic activity around the well site.
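
A minimal, hedged picture of such a real-time monitoring rule: compare each incoming reading against an operating envelope for its KPI and alert on violations. The tag names and limits below are invented for the example.

    # Minimal real-time monitoring rule: flag readings outside a KPI's operating envelope.
    # Tags and limits are illustrative, not real well parameters.
    LIMITS = {"pump_pressure_psi": (150, 900), "wellhead_temp_c": (10, 120)}

    def check(tag, value):
        low, high = LIMITS[tag]
        if not (low <= value <= high):
            return f"ALERT: {tag}={value} outside [{low}, {high}]"
        return None

    print(check("pump_pressure_psi", 965))  # ALERT: value above the upper limit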

Supervised learning models predict for known patterns and issues. These rely on past information of failures and models that have been honed over time in experimental and production setups. Organizations can use models for predictive maintenance of the pumps and pump optimization for higher productivity. Unsupervised learning models look for anomalies and possible signs of degradation. They utilize complex multivariate pattern data to determine correlations and possible deviations from normal behavior. Unsupervised models determine multivariate correlations between productivity and operational parameters using neural networks and identify early signs of pump degradation using time series analysis and anomaly detection to reduce the probability of a pump breakdown.
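
As a toy stand-in for the time-series side of this, early degradation can be approximated by checking whether the recent average of an efficiency metric has drifted well below its long-term average; a production system would use proper time-series models and multivariate features. All numbers below are fabricated.

    # Toy degradation check: has the recent average of a pump efficiency metric
    # drifted well below its long-term average? Real systems use proper time-series models.
    def degradation_suspected(series, recent=24, drift_fraction=0.10):
        long_term = sum(series) / len(series)
        recent_avg = sum(series[-recent:]) / min(recent, len(series))
        return recent_avg < long_term * (1 - drift_fraction)

    readings = [0.82] * 200 + [0.70] * 24   # hypothetical hourly pump efficiency values
    print(degradation_suspected(readings))  # True: recent hours sit well below the baseline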

Components of an intelligence system. Source: Abhishek Tandon

It is difficult to rely on one type of system. Constant improvements require a combination of human intelligence and machine intelligence. Due to the plethora of prior knowledge available to run oil wells effectively, machine learning and big data technologies provide the right arsenal for these systems to become even more sophisticated. A new-age intelligence system becomes a combination of known knowledge through existing models and unknown patterns derived from machine learning.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 14, 2019  10:29 AM

Use network telemetry for improved IoT analytics

Jonah Kowall Profile: Jonah Kowall
Internet of Things, iot, IoT analytics, IoT data, IoT data management, IOT Network

Today, IoT analytics is handled primarily through application instrumentation. This means the developer of the application inserts code, which sends telemetry back to some kind of monitoring or analytics platform. These solutions are most often SaaS or live in the public cloud. These are great methods when you control the code and know what and how to instrument. Oftentimes, people don’t have this prior knowledge. Another approach has been applying packet capture technologies to IoT. However, because so many IoT solutions leverage content delivery networks and public cloud, that approach doesn’t work particularly well and leaves large visibility gaps.

Some forward-thinking organizations have begun to use traffic data such as NetFlow, sFlow and IP Flow Information Export (IPFIX) to send back IoT information within a network flow. This has several advantages when used to capture IoT-specific data. First, the data is standardized into industry-accepted formats, which I will get into later. The second is that once the data is captured from the gateway, it can be correlated with traffic data coming from the data center or cloud services in use. Today’s public cloud environments all have the ability to generate and export flow data, including the three examples listed below, sorted by popularity.

  1. Amazon provides the Virtual Private Cloud (VPC) Flow Log service. The service exports network traffic summaries — such as traffic levels, ports, network communication and other network-specific data — across AWS services on user-defined VPCs to understand how components communicate. The data is published in JavaScript Object Notation (JSON) to CloudWatch Logs or a Simple Storage Service bucket, or can be fed to other services such as Kinesis. The data contains basic network data about the communication flow and is published every 10 to 15 minutes. Unfortunately, Amazon’s service is a bit behind the other major cloud providers. (A short parsing sketch follows this list.)
  2. Microsoft Azure provides Network Security Group Flow Logs. This service similarly publishes the logs in JSON format to Azure storage. The one difference — which improves upon Amazon’s implementation — is that Microsoft publishes the data in real time, making it more useful operationally.
  3. Finally, Google is ahead of the pack on this data source. Google has created the VPC Flow Log service, which can be consumed by Stackdriver logging. Google does everything the others do, but most importantly, it also embeds latency and performance data within the exported logs. The data is highly granular, which makes it more useful, but it generates a lot of volume.
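
To make the first item concrete, the sketch below parses one record in what AWS commonly documents as the default, space-separated VPC Flow Log format. The field order should still be checked against AWS's current documentation, and the sample record is fabricated.

    # Parse one AWS VPC Flow Log record (default space-separated format, as commonly documented).
    # Verify the field order against AWS documentation before production use.
    FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr", "srcport",
              "dstport", "protocol", "packets", "bytes", "start", "end", "action", "log_status"]

    def parse_flow_record(line: str) -> dict:
        return dict(zip(FIELDS, line.split()))

    sample = "2 123456789010 eni-0a1b2c3d 10.0.1.5 10.0.2.9 49152 443 6 25 20000 1565000000 1565000060 ACCEPT OK"
    rec = parse_flow_record(sample)
    print(rec["srcaddr"], "->", rec["dstaddr"], rec["bytes"], "bytes", rec["action"])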

Tools for network-flow export

As you can see, there are many implementations. All of them provide a rich set of summarized data sets that are very useful for understanding how services interact, which services are most used and which applications consume network resources or answer requests. This data is valuable for countless operational and security use cases.

If you are implementing on a smaller device and want to collect data from the gateway or the IoT device itself, there are lightweight network flow-export tools that can provide a lot of additional context on network traffic generated by the hardware. These agents can sit on Windows or Linux systems. Many of them will run on embedded Linux devices as well. Here are some options:

  1. nProbe has been around for a long time, and hence is very mature and heavily used. The company behind it has been tuning and expanding its capabilities for over a decade. While nProbe was once free, it now costs money, but it can classify over 250 types of applications using deep packet inspection. These application types and latency information are embedded in the exported flow, which adds additional value. The solution can operate in both packet capture mode and PF_RING mode to reduce the overhead on the operating system.
  2. kProbe is a Kentik product to do what nProbe does, which is to convert packet data from the network card to NetFlow or kFlow. While it doesn’t have as many application decodes, it’s free to use and highly efficient.
  3. SoftFlowd is a great open-source project, but it hasn’t had too many updates recently. Similar to the other solutions above, this small open-source agent converts packet data to flow data. The product has been tuned over many years and is highly efficient. It lacks a lot of application classification, but it does do some.
  4. NDSAD is a host-based agent that captures traffic from the interfaces and exports to NetFlow v5. It also supports more advanced capture methods for lower latency capture from the network card. This project doesn’t execute application classification, so the exported flow is less rich when it comes out as NetFlow.

Analyze flow data with these tools

Once these products are in place, there are many tools to analyze their output. Unlike tracing tools on the software side, which lock you into a specific implementation, network data sources are standardized. This is the case for NetFlow, Simple Network Management Protocol (SNMP) and streaming telemetry, though streaming telemetry has fewer standards than the others.

While each vendor that makes network devices has its own analytics and management platform, those platforms don’t support other vendors. Most environments are highly variable, with many vendors and open-source components deployed. Each device has a different format for NetFlow, but this is handled by flexible NetFlow templates and IPFIX. SNMP is handled via management information bases. Streaming telemetry is a new data type, but it lacks data taxonomy standards, which is a step back from SNMP. Tools that ingest any of this network data will normalize it so the user doesn’t need to do that work. That means you can avoid lock-in even when using vendor-specific implementations, because access becomes standard once the data is in network-based analytics tools that aren’t made by the device vendors.

Aside from the vendor tools, there are more popular third-party tools, such as Kentik, and other open-source options. Most of them can handle NetFlow and other network data, but few can handle the cloud-based flow log data too. In IoT, scale is an important consideration, which causes problems for many of the older tools built on traditional databases. Common commercial tools to analyze flow data include those built by SolarWinds, ManageEngine, Plixer, Paessler and Kentik. I will highlight a few open-source analytics products that have been actively maintained within the last five years.

  1. ntopng was designed by the same folks who made nProbe and Ntop. This product can take data from flow or packet data and does similar visualizations in a nice web-based user interface. This tool has been around for a long time and works great. However, it isn’t meant as a scalable analytics platform beyond a small number of low-volume hosts. It’s still a useful tool for those managing networks. It’s also suitable for those looking to gather data about what’s happening on the network and which devices are speaking to one another.
  2. Cflowd is a project by the Center for Applied Internet Data Analysis, which is a non-profit focused on gathering and analyzing data on the internet. This project is a good foundation for building a DIY analytics solution and is still maintained.
  3. sflowtool captures sFlow coming from various sources and can output text or binary data, which can be used to feed data into another tool. It can also convert the incoming data to NetFlow v5, which can be forwarded elsewhere. sFlow is a great data source, but not the most common. It contains data that Juniper generates from many of their devices.

As you can see, many of these analytics tools are not full-featured. More often than not, if an organization wants a free or open-source analytics solution, it ends up using Elasticsearch, Logstash and Kibana (the Elastic Stack), which runs into scalability issues when dealing with network data. This trend will progress quickly as the cloud creates unique requirements and constraints for organizations moving in that direction. We should see a lot more IoT projects using network data, as it’s a highly flexible and well-understood data source.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 13, 2019  1:25 PM

Disentangle IoT’s legal and ethical considerations

Helena Lisachuk Profile: Helena Lisachuk
Data governance, Internet of Things, iot, IoT compliance, IoT cybersecurity, iot privacy, IoT regulations, iot security

If you’ve ever watched a toddler eat spaghetti, you know just how messy it can get. The pasta tangle resists every effort, and sauce gets everywhere. Hands get involved. But as the child grows, they learn how to use tools more effectively. Within a few years, they can use a fork to tame the tangle and make quick, neat work of a meal.

I think this is a good analogy for companies new to IoT solutions: they can find a tangle of compliance considerations they may not have expected. These might include legal and regulatory requirements, as well as ethical considerations around the use of IoT that may not be legally required but are good practice nonetheless. Companies with a global footprint have even more spaghetti on their plates, as they contend with each host country’s unique ruleset. Why is this?

The compelling strength of IoT lies in its ability to apply the power of the digital world to almost any problem in the physical world. This crossover means IoT touches rules made for each. An IoT-enabled insulin pump, for example, doesn’t just need to meet safety standards for a medical device; it also has to meet the privacy and cybersecurity standards of a digital tool, as well as respect and obey intellectual property laws. Then there are ethical considerations. Can you ensure that end users have truly informed consent as to how the device operates?

So how can organizations deploy IoT to achieve its benefits while modeling responsible corporate citizenship at the same time? Just as with the fork and the spaghetti, the answer is to use the right tool. In this case, the tool is design thinking. Consider framing current and upcoming laws and regulations as design constraints, then craft IoT solutions accordingly. With growing public awareness of ethical and privacy issues in the digital realm, organizations can’t afford for IoT design to be an afterthought. The first step? Get a clear understanding of what’s on the compliance plate.

The different strands of regulation

Generally speaking and not surprisingly, the wider the scope of an IoT solution, the greater the number of compliance considerations it’s likely to encounter. These commonly include:

  • Privacy and security. Since IoT’s sweet spot is collecting and analyzing massive volumes of data, perhaps the largest area of regulation is how to protect that data. There are multiple data privacy and security laws in multiple nations, each with a different impact on IoT solution design. Adding to the complexity: these laws can vary by industry – such as healthcare or energy — and requirements can vary widely even within a given region. For instance, while many know that the European Union’s (EU) General Data Protection Regulation (GDPR) regulates how many forms of data are collected and stored across the EU, some may not realize that GDPR isn’t necessarily uniform across the EU. Some aspects of those rules are left up to individual member states to define and implement.
  • Technical regulations. Technical IoT regulation can start at a level as granular as component technologies. While companies may not need to design the sensors or communication protocols they use, they should be aware of the regulations that govern them. For example, communication protocols using unlicensed spectrum may be difficult to use in certain areas, such as airports.
  • Intellectual property, export and trade compliance rules. IoT solutions that span national borders can raise difficult questions ranging from who owns intellectual property to how to comply with tariffs. In fact, moving certain types of data and information across borders can trigger laws on the export of controlled technology.
  • Workplace and labor. Legal and ethical concerns don’t just apply to customer-facing technologies. There are just as many regulatory issues for purely internal IoT applications. Solutions to improve workplace efficiency can touch regulations for gathering employee data, and how that data can — or can’t — be used in employment decisions or performance reviews.

Finding the right tool to untangle

When laid out in such a list, IoT’s potential legal and ethical considerations can seem daunting. The key to not being overwhelmed is to not ignore them. Start your assessment of legal and ethical considerations early in the design of an IoT solution. That way you can tailor the solution to the desired outcomes and you will not find yourself forced into costly changes during implementation.

Tools and expert advice at this early stage can also help you understand which regulations impact your potential IoT use case. For example, Deloitte Netherlands has created a tool that can sift through the list of EU and national regulations and pull out those that apply to a given IoT solution. Such a list of applicable regulations can help make clear the specific requirements that an IoT solution must meet, helping to tailor the hardware, software and governance decisions to suit.

Ethical IoT as a differentiator

Legal and regulatory compliance can often seem like a costly and tiresome burden, but breaches or the misuse of data can have real and staggering cost — both in dollars, and damage to reputation.

This fact is prompting some companies to take a different approach to IoT. Rather than viewing legal and ethical compliance as a burden, they’re looking to make ethics a competitive differentiator. Much like organic products have become a differentiator for some food brands, so too can a transparent and ethical approach to IoT be a differentiator, allowing customers to have confidence in a brand as a steward of their information collected via IoT.

Ethics can often seem like a scary prospect to companies. Get it wrong and you end up in the news. But ethics really is about what people value, and that can be an incredibly powerful tool for companies. After all, if you understand what people value, you can deliver that value to them more easily. Understanding the legal and ethical considerations of IoT is not just a compliance check; it is a core requirement of doing IoT right.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

