IoT Agenda


August 21, 2019  3:48 PM

Mapping the device flow genome

Greg Murphy
Internet of Things, iot, IoT data, IoT data management, IoT device management, IoT devices, iot security

The explosion of connected devices has given rise to today’s hyper-connected enterprise, in which everyone and everything that is fundamental to the operation of an organization is connected to a network. The number of connected devices runs into the billions and is growing exponentially in both quantity and heterogeneity. This includes everything from simple IoT devices, such as IP cameras and facilities access scanners, to multi-million-dollar functional systems like CT scanners and manufacturing control systems. With the sudden surge of disparate and complex devices all tapping into various enterprise networks, it is little surprise that hyperconnectivity is becoming an incredibly complex and increasingly untenable problem for IT and security groups to address. This is especially true for device-dependent Global 2000 organizations, major healthcare systems, retail and hospitality operations or large industrial enterprises.

A complex problem like hyperconnectivity cannot be solved without first establishing a baseline of understanding. For example, in the medical community, development of targeted therapy for many serious diseases was comparatively ineffective before the mapping and sequencing of more than three billion nucleotides in the human genome. The Human Genome Project, a 15-year collaborative effort to establish this map of human DNA, has enabled the advancement of molecular medicine at a scale that was once impossible.

Similarly, IT, security and business leaders cannot address the myriad challenges of the hyper-connected enterprise without fully mapping the device flow genome of each network-connected device and system. Much like DNA mapping, mapping the device flow genome is a significant challenge, but well-worth the effort for the intelligence it provides.

The challenge of mapping such a system is enormous because it requires a complete understanding of both the fixed characteristics of each device and the constantly changing context in which it operates. To do this at scale, network operators must be able to apply sophisticated machine learning to accurately classify each device and baseline its dynamic behavior along with the context of the network.

If operators can do that, they can immediately identify potential mutations in the genome — devices that are not behaving the way they should — and mount an appropriate response to ensure business continuity and prevent catastrophic downstream consequences. At the same time, they can leverage artificial intelligence to define and implement actionable policies that prevent future recurrences. That is the only reliable way to protect critical assets and deliver true closed loop security in the hyper-connected enterprise.
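To make the baselining idea concrete, here is a minimal sketch of how per-device behavior could be learned and deviations flagged. The flow fields (device ID, bytes sent, destination), the z-score test and the threshold are illustrative assumptions, not a description of any particular product.

    # Minimal sketch of per-device behavior baselining (illustrative only).
    from collections import defaultdict
    from statistics import mean, stdev

    class DeviceBaseline:
        """Learns per-device traffic volume and destinations during a training period."""

        def __init__(self):
            self.bytes_history = defaultdict(list)      # device_id -> observed byte counts
            self.known_destinations = defaultdict(set)  # device_id -> destinations seen

        def learn(self, device_id, bytes_sent, destination):
            self.bytes_history[device_id].append(bytes_sent)
            self.known_destinations[device_id].add(destination)

        def is_anomalous(self, device_id, bytes_sent, destination, z_threshold=3.0):
            """Flag traffic that deviates sharply from the learned baseline."""
            history = self.bytes_history[device_id]
            new_destination = destination not in self.known_destinations[device_id]
            if len(history) < 2:
                return new_destination  # not enough history for a volume check
            mu, sigma = mean(history), stdev(history)
            volume_spike = sigma > 0 and abs(bytes_sent - mu) / sigma > z_threshold
            return new_destination or volume_spike

    # Example: a monitor that suddenly sends far more data to an unfamiliar host
    baseline = DeviceBaseline()
    for hourly_bytes in [1200, 1100, 1250, 1180]:
        baseline.learn("hr-monitor-42", hourly_bytes, "10.0.0.5")
    print(baseline.is_anomalous("hr-monitor-42", 48000, "203.0.113.9"))  # True

In practice, the classification and baselining described above would draw on far richer features, but the pattern is the same: learn what normal looks like per device, then flag deviations.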

Mapping vs. fingerprinting

Traditionally, solutions seeking to identify and potentially classify devices on a network utilize static device fingerprinting, which can discover a device’s IP address, use MAC address lookup to identify the device manufacturer, and apply other rudimentary techniques to build a generic profile of the device. Fingerprinting answers some important but very basic questions: How many devices are connected to the network? To which ports and VLANs are they connected? How many of these devices are from Manufacturer X?

Gathering more specific information has typically required installing agents on each endpoint. In the hyper-connected enterprise, that is simply not possible because the scale and heterogeneity of these devices quickly breaks traditional IT and security models. Instead, by fully mapping the device flow genome automatically — without any modifications to the device or the existing enterprise infrastructure — an operator will have identified details that lead to actionable insight.

As an example, a fingerprinting solution might — at its optimum — enable a hospital to identify the number of heart rate monitors connected to its network. Mapping the device flow genome would not only identify those heart rate monitors, but also provide the information that six of them are subject to an FDA recall, two of them are running an outdated OS that makes them incredibly vulnerable to ransomware, and three of them are communicating with an external server in the Philippines. All of which are major red flags.

This level of granularity is necessary and attainable for every device: IP cameras, HVAC control systems, access badge scanners, self-service kiosks, digital signage, infusion pumps, CT scanners, manufacturing control systems, barcode scanners, and more. Even the devices that find their way into an environment without operator knowledge, such as Amazon Echo and Apple iPad. The quantity and variety of these devices is almost unimaginable in the enterprise today, and it is going to grow by orders of magnitude in the near future.

Identify and take control

Once the valuable data has been garnered from mapping the device flow genome, operators will have a sophisticated level of detail on what’s connected to their networks, what each device is doing and what it should be doing. That information, analyzed and applied appropriately, should enable hyper-connected enterprises to take control of their vast array of devices to ensure effective protection today and over time.

AI-based systems will enable enterprises to deploy powerful policy automation to regulate the behavior of every class of device so none are able to communicate in any manner — either inside or outside of the network — that exposes them to risk and vulnerability. From there, enterprises can fully secure each class of device by implementing micro-segmentation and threat remediation policies with sophisticated and actionable AI.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

August 21, 2019  3:24 PM

IoT brings the physical and digital worlds together

Dipesh Patel
Enterprise IoT, global IoT, IoT analytics, IoT data, IoT data management, IoT in retail, iot security, smart buildings

It’s becoming increasingly clear to me that IoT is the start of something entirely new, rather than an end state in itself. The real prize is in how IoT — and a global host of connected devices — will add new context to data already being gathered through existing digital, analogue and manual means. However, taking advantage of this opportunity has not been easy. According to McKinsey, as much as 70% of IoT projects remain stuck in the proof of concept phase, rather than moving to deployment. IoT adopters need to be able to show real business value as well as how IoT solves a particular problem, and it all comes down to obtaining a complete picture of the data.

To make this happen, we need to bring the physical and digital realms into close harmony. We also need to ensure that there is clear transparency and consent when obtaining customer data. As we analyze data, we must ask ourselves whether the data comes from IoT devices, or from digital engagement. Privacy and security must be treated as first class citizens and not as an afterthought for IoT to thrive. All of this is a complex technology task, but one that is surmountable.

Transforming business outcomes through IoT data

A real-world example of the data-driven opportunity is in the retail space, where the combination of IoT-enabled physical stores and shoppers’ online buying preferences are opening new possibilities. These IoT-enabled stores are becoming more prevalent as retailers look to drive omnichannel personalized experiences, seamless checkout and tailored offerings for their shoppers. Value comes from the ability to combine previously siloed, in-store IoT real-time data with a shopper’s digital engagement, such as the store’s mobile apps and loyalty programs, to provide a more holistic experience. That experience includes delivering coupons tailored to shoppers’ buying histories, providing personalized recommendations from an in-store associate and optimizing inventory and product availability.

At the end of the day, it is about interacting with the shopper through their preferred channels, giving them unique experiences and providing tailored offers that drive loyalty, which ultimately leads to repeat business. According to an Arm Treasure Data study, nearly 50% of shoppers would consent to companies using their data if it meant getting the right rewards and incentives.

This is just one example of a real-world scenario in retail. Another example is a building owner bringing together data from HVAC, security, lighting and IoT devices to obtain a unified view of building operations to drive cost savings and enhance customer experiences. The value is replicable across industries as data silos are removed and separate data sets are brought together.

Making data security and privacy a priority

Bringing the physical and digital worlds together paints a far richer data picture. It also means there needs to be an added emphasis on security and privacy whenever data is involved, whether that is a retailer delivering more personalized customer experiences or a property manager using IoT technologies to better understand use of their commercial building space.

Security is vital as adversaries continue to get more advanced in their attack methods, and the cost of cybercrime for organizations continues to grow. Data security starts from the ground up, with IoT devices built and tested on secure frameworks. Organizations should look for IoT data management solutions that support secure management of the physical IoT device and its data throughout the lifecycle, and securely unify a broad set of enterprise digital data with IoT data.

Privacy concerns on how data is being collected, used and stored make transparency and consent critical. They must be addressed. For example, in the retail scenario described above, consent can be obtained via an opt-in through a store’s mobile app or loyalty program. The data management solution should provide tools and features to enact and manage leading privacy capabilities within applications that collect, store and utilize data.

Unlocking new possibilities with IoT data

The combination of physical IoT and digital information presents a wealth of opportunity for organizations across industries to transform their businesses. Organizations should look at IoT solutions that enable them to securely unify, store and analyze all of this data to deliver actionable insights. Ultimately, the true value of IoT will be achieved if data can be harnessed to solve real business challenges at scale, while also keeping security and privacy at the forefront.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 21, 2019  2:51 PM

IoT edge devices need benchmarking standards

Jason Shepherd
Edge computing, Internet of Things, iot, IoT devices, IoT edge, IoT edge computing, IoT standards

This is the third part of a four-part series. Start with the first post here.

In the first two installments of this four-part series I discussed what edge computing is and why it matters. I also outlined some key architectural and design considerations based on the increasing complexity of hardware and software as you approach the device edge from the cloud. In this installment, I’ll dig even deeper.

Infrastructure size matters

Cameras are one of the best sensors around, and computer vision — applying AI to image-based streaming data — is the first killer app for edge computing. Only someone who wants to sell you wide area connectivity thinks it’s a good idea to blindly send high-resolution video over the Internet. A smarter practice is to store video in place at your various edges and review or backhaul only when meaningful events occur.

For any given use of edge devices, a key value that a provider can give customers is pre-validation and performance benchmarks on workloads. This ensures customers purchase the right-sized infrastructure up front and get reliable performance in the field. Surveillance for safety and security is fairly constrained in terms of variables, such as the number of cameras, resolution, frame rate and footage retention time. The marketplace for camera makers and video management providers is well-established. The combined constrained variables make benchmarking appropriate infrastructure sizes relatively straightforward.
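Because those variables are constrained, a back-of-the-envelope sizing model is straightforward. The sketch below estimates storage for continuous recording; the example bitrate and retention figures are assumptions for illustration, not vendor benchmarks.

    def surveillance_storage_tb(cameras, bitrate_mbps, retention_days, duty_cycle=1.0):
        """Rough storage estimate for recorded video.

        bitrate_mbps is the per-camera stream bitrate (a function of resolution,
        frame rate and codec); duty_cycle < 1.0 models motion-triggered recording.
        """
        seconds = retention_days * 24 * 3600
        total_bits = cameras * bitrate_mbps * 1e6 * seconds * duty_cycle
        return total_bits / 8 / 1e12  # decimal terabytes

    # Example: 32 cameras at an assumed 4 Mbps each, 30-day retention
    print(f"{surveillance_storage_tb(32, 4, 30):.1f} TB")  # roughly 41.5 TB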

As new tools and the rise of edge computing enable scale for the use of computer vision, applications become less constrained than surveillance, and data regarding behavior and performance needs is harder to come by. For example, in a brick and mortar retail scenario, the compute power needed to identify basic customer demographics — such as gender or age range — with an AI model is different than what’s needed to assess individual identity. Retailers often don’t have power or cooling available in their equipment closets, so they must get creative. It would be valuable for them to know in advance what the loading requirements will be.

Users’ needs are likely to grow over time with the consolidation of more workloads on the same infrastructure. It’s important to deploy additional compute capacity in the field and invest in the right modular, software-defined approach up front, so you can readily redistribute workloads anywhere as your needs evolve.

Fragmented edge technology makes benchmarking trickier

In more traditional telemetry and event-based IoT, measuring efficacy and developing benchmarks is especially tough due to the inherent fragmentation near the device edge. Basically every deployment tends to be a special case. With so many different uses and tool sets, there are no established benchmark baselines.

I often draw edge-to-cloud architecture outlines left to right on a page because it fits better on slides with a landscape layout. A few years back, while I was presenting many of these concepts, someone pointed out that the cloud on the right is like the East, with the longest legacy of refinement and stability, whereas the edge on the left is the Wild West. This pretty much nails it, and this is why it’s so important for us to collaborate on open tools like EdgeX Foundry that facilitate interoperability and order in an inherently fragmented edge solution stack. It takes a village to deploy an IoT solution that spans technology and domain expertise.

In addition to facilitating open interoperability, tools like the EdgeX Foundry framework provide bare-minimum plumbing to not only serve as a center of gravity for assembling predictable solutions, but also facilitate stronger performance benchmarks regardless of use.

Tools should fit a standard for IoT edge interoperability, so IT pros can focus on value instead of reinvention. An IoT standard would also create benchmarking for infrastructure sizes, so customers can better anticipate their needs over time.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 19, 2019  5:14 PM

Protect IoT bare dies and wire bonds for high reliability

Zulki Khan
Internet of Things, iot, IoT devices, IoT hardware, IoT PCB, PCB design, Printed circuit boards

As IoT devices gain greater acceptance within mission critical industries such as industrial, medical and aerospace, product integrity and reliability are of the highest order. Bare chip or die protection is at the top of the list for ensuring IoT reliability and product integrity.

In most instances, IoT devices require a combination of conventional surface-mount printed circuit board (PCB) assembly and microelectronics manufacturing, which creates hybrid manufacturing. IoT device PCB microelectronics manufacturing most often requires dies to be placed on a PCB, which can be a rigid, flex or rigid-flex circuit. In some cases, dies can also be placed on a substrate.

Why protecting die and wire bonding is important

Protecting a bare die and its associated wire bonds is critical to assure mechanical sturdiness and avoid moisture, thus maintaining a high degree of reliability for the IoT user. PCB microelectronics assembly requires a very delicate, fine wire that is generally made of gold. Typical wire gauges are one, two, three and five mils. Five mil wire is typically used for very high current applications. More often than not, one mil wire is used, but in some cases, sub-mil wire — 7/10 of a mil — is also used.

It’s highly advantageous for IoT device OEMs to get a good handle on how best to protect bare dies and their associated wire bonding. That way, OEMs can assure themselves of high levels of product reliability.

Methods to protect die and wire bonding

There are two distinct sealing compound methodologies for protecting the die and wire bonding. One is called by the unusual name of glob top, which actually fits very well since a glob of epoxy is placed on top of the die to protect it.

Dam and fill is a similar die sealer, which is in this same glob top category. It involves creating a dam or wall around the die and associated wire bonding by using a high viscosity material. Then, the middle or cavity surrounded by the dam is filled with a low viscosity epoxy. Thus, the high and low viscosity materials act as an effective protector of the die and wire bonding.

Lid and cover is the second encapsulation method. The lid can be ceramic, plastic or glass, depending on customer specifications and application. Such a lid can be soldered onto the substrate if the material is aluminum, nickel, gold or hot air solder leveling.

In some cases, a specialized lid with B-staged epoxy is provided. Most likely, it is custom made with epoxy already applied on the lid or cover. All that is needed in this case is to cure it and then apply it around the die and wire bonds. While the lid and cover protection method isn’t as widespread as glob top, the lid and cover approach is used depending on specialized PCB applications.

Reliability depends not only on the right bare die sealing compound for the right IoT PCB application, but also on the level of microelectronics manufacturing experience. PCB microelectronics manufacturing personnel must have a good understanding of these protection methodologies and how to accurately apply them to form a perfect microelectronics assembly.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 19, 2019  4:12 PM

Lightweight Machine-to-Machine technology emerges in IoT networks

William Yan
Cat-M, Internet of Things, iot, IOT Network, IoT protocols, IoT wireless, M2M, Narrowband IoT

Last year’s report published by Gartner Research cited that connected things in use would hit 14.2 billion in 2019, with exponential growth in the years thereafter. IoT is garnering lots of attention, and a lot of organizations are considering and designing IoT services and technologies. One of the key emerging IoT-focused technologies is the Lightweight Machine-to-Machine (LwM2M) protocol, a device communication and management protocol designed specifically for IoT services.

What is LwM2M?

The LwM2M standard is published and maintained by the Open Mobile Alliance (OMA). It was first released in February 2017 and initially designed for constrained devices with radio uplinks. As it stands now, LwM2M is a rather mature protocol that has been in development for several years. In that time, LwM2M has gone through four versions of specifications and has been tested in eight test fests organized by OMA. Compared to other IoT device management specifications, one can say that LwM2M is starting to gain wide market recognition.

LwM2M is often compared to Message Queuing Telemetry Transport (MQTT), another IoT protocol that is arguably the most popular device communication protocol in IoT services. MQTT is standardized by the International Organization for Standardization (ISO) and is a publish-subscribe messaging protocol. As such, it requires a message broker for data communication.

LwM2M comes with a well-defined data model representing specific service profiles, such as connectivity monitoring, temperature reading and firmware updates. Thanks to its well-defined data model structure, the standard enables common, generic, vendor-neutral and implementation-agnostic features, such as secure device bootstrapping, client registration, object and resource access, and device reporting. These mechanisms greatly reduce technology fragmentation and decrease potential interoperability errors.
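For a feel of that data model, the sketch below mimics the object/instance/resource hierarchy and LwM2M-style paths. The IDs shown (3 for Device, 3303 for IPSO Temperature, 5700 for Sensor Value) follow the public OMA/IPSO registries, but treat the snippet as an illustration, not a working LwM2M client.

    # Illustrative view of the LwM2M object/instance/resource hierarchy.
    device_tree = {
        3: {                      # Device object
            0: {                  # instance 0
                0: "Acme Corp",   # resource 0: Manufacturer (value is made up)
                9: 87,            # resource 9: Battery Level (%)
            }
        },
        3303: {                   # IPSO Temperature object
            0: {
                5700: 21.4,       # resource 5700: Sensor Value
            }
        },
    }

    def read(path):
        """Resolve an LwM2M-style path such as '/3303/0/5700'."""
        obj, inst, res = (int(p) for p in path.strip("/").split("/"))
        return device_tree[obj][inst][res]

    print(read("/3303/0/5700"))  # 21.4

Because every conforming client exposes the same object and resource IDs, a server can read, observe or write these paths without device-specific integration work.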

What are the major advantages of LwM2M?

LwM2M is gaining recognition and starting to be adopted for facilitating IoT deployments due to its specific benefits. These include the following:

  • Ultra-low link utilization: LwM2M is a lightweight protocol that keeps data usage low.
  • Operation over links with small data frames and high latency, as is the case in most IoT use cases.
  • Greater power efficiency through Datagram Transport Layer Security (DTLS) resumption and Queue Mode, which reduce energy usage and make the protocol suitable for devices in power saving mode and extended Discontinuous Reception modes.
  • Support for both IP and non-IP data delivery transports, which minimizes energy consumption.
  • Optimized performance in cellular-based IoT networks such as Narrowband-IoT and Long Term Evolution Cat-M.
  • Support for low-power wide area network binding.

LwM2M also meets the needs of enterprises that have to balance multiple factors — such as battery life, data rate, bandwidth, latency and costs — impacting their IoT services.

Who can benefit from the LwM2M protocol?

LwM2M is becoming important for enterprises and service providers alike because of its successful use in IP and non-IP transports. It provides device management and service enablement capabilities for managing the entire lifecycle of an IoT device. LwM2M also introduces more efficient data formats, optimized message exchanges and support for application layer security based on Internet Engineering Task Force (IETF) Object Security for Constrained RESTful Environments (OSCORE).

What does the future hold?

As a technology, LwM2M is continually evolving. There’s an active OMA group constantly working on advancing the technology. The next specification release expected is version 1.2, which will provide support for many new things in a number of areas, such as support for MQTT and HTTP; use of the IETF specification for end-to-end secured firmware updates; introduction of a dedicated gateway enabling group communication; and optimization efforts, such as registration templates and DTLS/TLS 1.3 support.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 14, 2019  2:34 PM

New-age intelligence systems for oil and gas operations

Abhishek Tandon
Internet of Things, iot, IoT analytics, IoT data, IoT sensors, Machine learning

The oil and gas industry has been going through a tumultuous time of late. With volatile crude oil prices and geopolitical trends putting pressure on supply, it is becoming imperative for oil and gas companies to manage costs through operational effectiveness and minimize any production hurdles due to unplanned downtimes and unforeseen breakdowns.

Before making production decisions, organizations must understand the complex beast that is upstream operations, with data points to analyze including seismic and geological data to understand the ground conditions; oil quality data to determine gas oil ratio, water cut and submergibility; and pump calibration to ensure that it is optimized for the given conditions. Too much pressure on the pump and it is likely to break; too little pressure and it is being underutilized.

Technology is likely to be a top disruptor in the future of oil and gas operations for this very reason. IoT sensor data analytics and machine learning will enhance the machine interface and improve the effectiveness of brownfield setups. But what really constitutes a true intelligence system that is likely to disrupt this highly complex industry?

The new avatar of data analysis

There has never been a dearth of data usage in oil and gas operations. Even before data science became cool, there was a tremendous amount of statistical research that was being utilized to understand seismic and geological data and manage oil field operations efficiently. Data has always been the backbone of decision making in the oil and gas sector.

With the advent of data technologies that can handle scaling and machine learning to help operations teams and scientists make sense of the data, new-age intelligence systems are also starting to become top priorities in the long list of digital transformation initiatives.

Extracting the unknown unknowns

There are a number of prebuilt models that are used to determine the oil quality and calibrate well equipment. By feeding information into these models, field engineers have a good idea of the way the well is operating.

Machine learning starts to surface the unknown unknowns. It makes the existing setup more sophisticated by analyzing multivariate patterns and anomalies that can be attributed to past failures. Moreover, the analysis patterns are derived from several years of data to reduce any inherent bias. Machine learning alone cannot answer every analysis question; it is one piece of the puzzle and enhances existing knowledge acquired through years of research.

Constituents of a new-age intelligence system

The speed at which organizations receive data and conduct analysis is of the utmost importance. Hence, a sophisticated decision system needs to deliver insights quickly and with tremendous accuracy. A disruption in an oil well can cause a revenue loss as high as $1 million per day.

A true decision support system should have IoT infrastructure, real-time monitoring systems, supervised learning models and unsupervised learning models. IoT infrastructure includes low power sensors, gateways and communication setups to ensure that all aspects of well operations are connected and providing information in near real time. Real-time monitoring systems allow constant monitoring of the assets driven by key performance indicators and look for any issues or spikes that can be caught by the naked eye. Typical scenarios that real-time monitoring systems would cover include existing oil production, temperature and pressure of the well pumps and seismic activity around the well site.

Supervised learning models predict for known patterns and issues. These rely on past information of failures and models that have been honed over time in experimental and production setups. Organizations can use models for predictive maintenance of the pumps and pump optimization for higher productivity. Unsupervised learning models look for anomalies and possible signs of degradation. They utilize complex multivariate pattern data to determine correlations and possible deviations from normal behavior. Unsupervised models determine multivariate correlations between productivity and operational parameters using neural networks and identify early signs of pump degradation using time series analysis and anomaly detection to reduce the probability of a pump breakdown.
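To illustrate the unsupervised piece, here is a minimal sketch that screens pump readings against a baseline learned from healthy operation. The sensor names, simulated data and threshold are illustrative assumptions, not a field-proven model.

    # Minimal sketch of unsupervised anomaly screening on pump telemetry.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated healthy history: rows = readings; columns = pressure, temperature, flow rate
    history = rng.normal(loc=[120.0, 65.0, 40.0], scale=[3.0, 1.5, 2.0], size=(500, 3))
    mu = history.mean(axis=0)
    sigma = history.std(axis=0)

    def anomaly_score(reading, z_threshold=4.0):
        """Return per-sensor z-scores and a flag if any sensor deviates sharply."""
        z = np.abs((np.asarray(reading) - mu) / sigma)
        return z, bool((z > z_threshold).any())

    # A reading with a pressure spike the healthy history never produced
    z, flagged = anomaly_score([145.0, 66.0, 39.0])
    print(z.round(1), flagged)  # pressure z-score far exceeds the threshold -> True

A production system would combine this kind of screening with the supervised failure models and the domain knowledge described above.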

Components of an intelligence system. Source: Abhishek Tandon

It is difficult to rely on one type of system. Constant improvements require a combination of human intelligence and machine intelligence. Due to the plethora of prior knowledge available to run oil wells effectively, machine learning and big data technologies provide the right arsenal for these systems to become even more sophisticated. A new-age intelligence system becomes a combination of known knowledge through existing models and unknown patterns derived from machine learning.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 14, 2019  10:29 AM

Use network telemetry for improved IoT analytics

Jonah Kowall
Internet of Things, iot, IoT analytics, IoT data, IoT data management, IOT Network

Today, IoT analytics is primarily done through application instrumentation. This means the developer of the application inserts code that sends telemetry back to some kind of monitoring or analytics platform. These solutions are most often SaaS or live in the public cloud. They are great methods when you have control over the code and know what and how to instrument, but oftentimes people don’t have this prior knowledge. Another approach has been to apply packet capture technologies to IoT. However, because so many IoT solutions leverage content delivery networks and public cloud, that approach doesn’t work particularly well and leaves large visibility gaps.
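As a rough illustration of that instrumentation approach, the sketch below posts a telemetry payload from device code to a monitoring backend over HTTP. The endpoint URL and payload fields are hypothetical placeholders, not any specific vendor's API.

    # Hypothetical example of application-level telemetry instrumentation.
    import json
    import time
    import urllib.request

    def send_telemetry(device_id, metrics, endpoint="https://monitoring.example.com/ingest"):
        """POST a JSON telemetry record to an (assumed) ingestion endpoint."""
        payload = json.dumps({
            "device_id": device_id,
            "timestamp": time.time(),
            "metrics": metrics,
        }).encode("utf-8")
        req = urllib.request.Request(
            endpoint, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status

    # send_telemetry("sensor-17", {"temperature_c": 21.4, "rssi_dbm": -67})

Every metric the platform sees has to be anticipated and coded this way, which is exactly the limitation the network-telemetry approach below avoids.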

Some forward-thinking organizations have begun to use traffic data such as NetFlow, sFlow and IP Flow Information Export (IPFIX) to send back IoT information within a network flow. This has several advantages when used to capture IoT-specific data. First, the data is standardized into industry-accepted formats, which I will get into later. Second, once the data is captured from the gateway, it can be correlated with traffic data coming from the data center or cloud services in use. Today’s public cloud environments all have the ability to generate and export flow data, including the examples listed below, which are sorted by popularity; a small aggregation sketch follows the list.

  1. Amazon provides the Virtual Private Cloud (VPC) Flow Log service. The service exports network traffic summaries — such as traffic levels, ports, network communication and other network-specific data — across AWS services on user-defined VPCs to understand how components communicate. The data is published in JavaScript Object Notation (JSON) to CloudWatch logs or to a Simple Storage Service bucket, or can be fed to other services such as Kinesis. The data contains basic network data about the communication flow and is published every 10 to 15 minutes. Unfortunately, Amazon’s service is a bit behind the other major cloud providers.
  2. Microsoft Azure provides the Network Security Group Flow Logs. This service similarly publishes the logs in a JSON format to Azure storage. The one difference — which improves upon Amazon’s implementation — is that Microsoft publishes the data in real-time, making it more useful operationally.
  3. Finally, Google is ahead of the pack on this data source. Google has created the VPC Flow Log service, which can be consumed by Stackdriver logging. Google does everything the others do, but most importantly, they also embed latency and performance data within the exported logs. The data is highly granular which makes it more useful, but it generates a lot of volume.
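As a small illustration of working with this kind of flow data, the sketch below aggregates per-source traffic volumes from a handful of records. The field names mirror the common VPC-style flow log schema and the sample values are made up.

    # Aggregate flow-log records into per-source traffic totals (illustrative data).
    from collections import Counter

    flow_records = [
        {"srcaddr": "10.0.1.12", "dstaddr": "10.0.2.30", "bytes": 48213},
        {"srcaddr": "10.0.1.12", "dstaddr": "203.0.113.7", "bytes": 1250000},
        {"srcaddr": "10.0.1.44", "dstaddr": "10.0.2.30", "bytes": 9870},
    ]

    bytes_by_source = Counter()
    for record in flow_records:
        bytes_by_source[record["srcaddr"]] += record["bytes"]

    # Which devices generate the most traffic -- a first step toward spotting
    # misbehaving IoT endpoints from network data alone.
    for src, total in bytes_by_source.most_common():
        print(src, total)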

Tools for network-flow export

As you can see, there are many implementations. All of them provide a rich set of summarized data sets that are very useful for understanding how services interact, which services are most used and which applications consume network resources or answer requests. This data is valuable for countless operational and security use cases.

If you are implementing on a smaller device and want to collect data from the gateway or the IoT device itself, there are lightweight network flow-export tools that can provide a lot of additional context on network traffic generated by the hardware. These agents can sit on Windows or Linux systems. Many of them will run on embedded Linux devices as well. Here are some options:

  1. nProbe has been around for a long time, and hence is very mature and heavily used. The company behind it has been tuning and expanding capabilities for over a decade. While nProbe was once free, it now costs money, but it has the ability to classify over 250 types of applications using deep packet inspection. These application types and latency information are embedded in the exported flow, which adds additional value to the flow. The solution can operate in both packet capture mode and PF_RING mode to reduce the overhead on the operating system.
  2. kProbe is a Kentik product to do what nProbe does, which is to convert packet data from the network card to NetFlow or kFlow. While it doesn’t have as many application decodes, it’s free to use and highly efficient.
  3. SoftFlowd is a great open-source project, but it hasn’t had too many updates recently. Similar to the other solutions above, this small open-source agent converts packet data to flow data. The product has been tuned over many years and is highly efficient. It lacks a lot of application classification, but it does do some.
  4. NDSAD is a host-based agent that captures traffic from the interfaces and exports to NetFlow v5. It also supports more advanced capture methods for lower latency capture from the network card. This project doesn’t execute application classification, so the exported flow is less rich when it comes out as NetFlow.

Analyze flow data with these tools

Once these products are in place, there are many tools to analyze the output from them. Unlike tracing tools on the software side, which lock you into a specific implementation, network data sources are standardized despite protocol differences. This is the case for NetFlow, Simple Network Management Protocol (SNMP) and streaming telemetry, though streaming telemetry has fewer standards than the others.

While each vendor that makes network devices has its own analytics and management platform, those platforms don’t support other vendors’ gear. Most environments are highly variable, with many vendors and open-source components deployed. Each device has a different format for NetFlow, but this is handled by flexible NetFlow templates and IPFIX. SNMP is handled via management information bases. Streaming telemetry is a new data type, but it lacks data taxonomy standards, which is a step back from SNMP. Tools that ingest any of this network data will normalize it so the user doesn’t need to do that work. That means you can avoid lock-in even when using vendor-specific implementations of these data sources, particularly because access becomes standard once the data is in vendor-neutral, network-based analytics tools.

Aside from the vendor tools, there are more popular third-party tools, such as Kentik, and other open-source options. Most of them can handle NetFlow and other network data, but few can handle the cloud-based flow log data too. In IoT, scale is an important consideration, which causes problems with many of the older tools built on traditional databases. Common commercial tools to analyze flow data include those built by SolarWinds, ManageEngine, Plixer, Paessler and Kentik. I will highlight a few open-source analytics products that have been actively maintained within the last five years.

  1. ntopng was designed by the same folks who made nProbe and Ntop. This product can take data from flow or packet data and does similar visualizations in a nice web-based user interface. This tool has been around for a long time and works great. However, it isn’t meant as a scalable analytics platform beyond a small number of low-volume hosts. It’s still a useful tool for those managing networks. It’s also suitable for those looking to gather data about what’s happening on the network and which devices are speaking to one another.
  2. Cflowd is a project by the Center for Applied Internet Data Analysis, which is a non-profit focused on gathering and analyzing data on the internet. This project is a good foundation for building a DIY analytics solution and is still maintained.
  3. sflowtool captures sFlow coming from various sources and can output text or binary data, which can be used to feed data into another tool. It can also convert the incoming data to NetFlow v5, which can be forwarded elsewhere. sFlow is a great data source, but not the most common. It contains data that Juniper generates from many of their devices.

As you can see, many of these analytics tools are not full-featured. More often than not, if an organization wants a free or open-source analytics solution, it ends up using Elasticsearch, Logstash and Kibana (the Elastic Stack), which runs into scalability issues when dealing with network data. This trend will progress quickly as the cloud creates unique requirements and constraints for organizations moving in that direction. We should see a lot more IoT projects using network data, as it’s a highly flexible and well-understood data source.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 13, 2019  1:25 PM

Disentangle IoT’s legal and ethical considerations

Helena Lisachuk
Data governance, Internet of Things, iot, IoT compliance, IoT cybersecurity, iot privacy, IoT regulations, iot security

If you’ve ever watched a toddler eat spaghetti, you know just how messy it can get. The pasta tangle resists every effort, and sauce gets everywhere. Hands get involved. But as the child grows, they learn how to use tools more effectively. Within a few years, they can use a fork to tame the tangle and make quick, neat work of a meal.

I think this is a good analogy for companies new to IoT solutions: they can find a tangle of compliance considerations they may not have expected. These might include legal and regulatory requirements, as well as ethical considerations around the use of IoT that may not be legally required but are good practice nonetheless. Companies with a global footprint have even more spaghetti on their plates, as they contend with each host country’s unique ruleset. Why is this?

The compelling strength of IoT lies in its ability to apply the power of the digital world to almost any problem in the physical world. This crossover means IoT touches rules made for each. An IoT-enabled insulin pump, for example, doesn’t just need to meet safety standards for a medical device; it also has to meet the privacy and cybersecurity standards of a digital tool, as well as respect and obey intellectual property laws. Then there are ethical considerations. Can you ensure that end users have truly informed consent as to how the device operates?

So how can organizations deploy IoT to achieve its benefits, while modeling responsible corporate citizenship at the same time? Just like the fork for spaghetti, the answer is the same: use the right tool. In this case, the tool is design thinking. Consider framing current and upcoming laws and regulations as design constraints, then craft IoT solutions accordingly. With growing public awareness of ethical and privacy issues in the digital realm, organizations can’t afford for IoT design to be an afterthought. The first step? Get a clear understanding of what’s on the compliance plate.

The different strands of regulation

Generally speaking and not surprisingly, the wider the scope of an IoT solution, the greater the number of compliance considerations it’s likely to encounter. These commonly include:

  • Privacy and security. Since IoT’s sweet spot is collecting and analyzing massive volumes of data, perhaps the largest area of regulation is how to protect that data. There are multiple data privacy and security laws in multiple nations, each with a different impact on IoT solution design. Adding to the complexity: these laws can vary by industry — such as healthcare or energy — and requirements can vary widely even within a given region. For instance, while many know that the European Union’s (EU) General Data Protection Regulation (GDPR) regulates how many forms of data are collected and stored across the EU, some may not realize that GDPR isn’t necessarily uniform across the EU. Some aspects of those rules are left up to individual member states to define and implement.
  • Technical regulations. Technical IoT regulation can start at a level as granular as component technologies. While companies may not need to design the sensors or communication protocols they use, they should be aware of the regulations that govern them. For example, communication protocols using unlicensed spectrum may be difficult to use in certain areas, such as airports.
  • Intellectual property, export and trade compliance rules. IoT solutions that span national borders can raise difficult questions ranging from who owns intellectual property to how to comply with tariffs. In fact, moving certain types of data and information across borders can trigger laws on the export of controlled technology.
  • Workplace and labor. Legal and ethical concerns don’t just apply to customer-facing technologies. There are just as many regulatory issues for purely internal IoT applications. Solutions to improve workplace efficiency can touch regulations for gathering employee data, and how that data can — or can’t — be used in employment decisions or performance reviews.

Finding the right tool to untangle

When laid out in such a list, IoT’s potential legal and ethical considerations can seem daunting. The key to not being overwhelmed is to not ignore them. Start your assessment of legal and ethical considerations early in the design of an IoT solution. That way you can tailor the solution to the desired outcomes and you will not find yourself forced into costly changes during implementation.

Tools and expert advice at this early stage can also help you understand which regulations impact your potential IoT use case. For example, Deloitte Netherlands has created a tool that can sift through the list of EU and national regulations and pull out those that are applicable to a given IoT solution. Such a list of applicable regulations can help make clear the specific requirements that an IoT solution must meet, helping to tailor the hardware, software and governance decisions to suit.

Ethical IoT as a differentiator

Legal and regulatory compliance can often seem like a costly and tiresome burden, but breaches or the misuse of data can have real and staggering costs — both in dollars and in damage to reputation.

This fact is prompting some companies to take a different approach to IoT. Rather than viewing legal and ethical compliance as a burden, they’re looking to make ethics a competitive differentiator. Much like organic products have become a differentiator for some food brands, so too can a transparent and ethical approach to IoT be a differentiator, allowing customers to have confidence in a brand as a steward of their information collected via IoT.

Ethics can often seem like a scary prospect to companies. Get it wrong and you end up in the news. But ethics really is about what people value, and that can be an incredibly powerful tool for companies. After all, if you understand what people value, you can deliver that value to them more easily. Understanding the legal and ethical considerations of IoT is not just a compliance check; it is a core requirement for doing IoT right.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 12, 2019  3:53 PM

How to develop edge computing solutions

Jason Shepherd
Edge computing, Internet of Things, iot, IoT architecture, IoT deployment, IoT edge, IoT edge computing, IoT hardware, IoT software, IoT strategy

This is the second part of a four-part series. Start with the first post here.

When considering how to architect for scale with edge computing solutions, it’s important to talk about both hardware and software in a system-level context. As a rule of thumb, needs on both fronts get more and more complex the closer you get to the device edge.

I created the chart below to help visualize this dynamic. The blue line represents hardware complexity and the green line indicates software complexity. The X-axis represents the continuum from the cloud down through various edge categories, ultimately ending at the device edge in the physical world.

Hardware and software customization

Figure 1: Inherent level of hardware and software customization from cloud to device edge. Source: Jason Shepherd of Dell Technologies

Hardware gets custom faster than software as you approach the device edge

There are a few key trends in this continuum that impact architecture and design decisions for IoT and edge computing. From the hardware lens, as you get into the remote field edges you need to consider elevated thermal support to run 24/7 in sealed networking cabinets, as well as potentially telco-specific equipment certifications.

Following the blue line further left from a traditional datacenter, note how hardware complexity starts to grow even faster than software. As you approach IoT and edge gateway-class compute, you begin to see needs for very specific I/O and connectivity protocols, many choices spanning Linux and Windows — what I call OS Soup — increasing ruggedization, specific shapes and form factors, and industry-specific features and certifications, such as Class 1, Division 2 for explosion proof.

The sharp ramp in complexity at the embedded and control edge

There’s a key inflection point for complexity at the embedded or control edge when hardware gets so constrained that software needs to be embedded, losing the flexibility of virtualization and containerization. Alternatively, the software requires a real-time operating system to address deterministic response needs, such as within programmable logic controllers on a factory floor and electronic control units in a vehicle. I call this inflection point the thin compute edge, and from there down to the device edge, the complexity curve ramps sharply up until you’re basically building custom hardware for every connected product.

Software consistency can be extended to the thin compute edge

Meanwhile, the software complexity curve — represented as the green line in Figure 1 — stays flatter a little longer, remaining consistent with established IT standards from the cloud down through telco edge and on-premises data centers until the first significant bump occurs with the aforementioned OS soup. The curve continues to stay relatively flat until you hit resource-constrained devices at the thin compute edge.

This inflection point is driven by total available memory — not CPU processing capability — and these days it’s generally about 512MB, which is enough to accommodate an OS and a minimum set of containerized applications to serve a meaningful purpose. The flexibility afforded by virtualization and containerization to maintain software-defined flexibility from the cloud to all the thin compute edges out there comes with a tax on footprint; however, this is a worthwhile tradeoff if any given device can support it. Eventually the software complexity curve reaches parity with the hardware curve at the extreme device edge, and you’re now creating custom embedded software for every device too.

Key considerations for edge computing solutions

We’ve established that both hardware and software inherently get more complex the closer you get to the device edge. Software stays more consistent a little longer, all the way down to the thin compute edge when available memory becomes a constraint and you have to go embedded. Here are some key considerations for developing edge computing infrastructure.

Extend cloud-native principles, such as platform-independent, loosely-coupled microservice software architecture, down to as close to the thin compute edge as possible. In doing so, you can maintain more consistent software practices across more edges, even when you inevitably need to go more custom for the hardware. The opportunity to bridge the software-hardware complexity gap close to the thin compute edge with more consistent software tools is represented by the yellow bar in Figure 1. Further, abstracting software into individual microservices — such as discrete functions — as much as possible enables you to easily migrate workloads up and down the edge to cloud continuum as needed. For example, in an initial deployment you may start with running an AI model in the cloud for simplicity, but as your data volume grows you’ll find that you need to push that model down to a compute node closer to the device edge to act on data in the moment and only backhaul meaningful data for retention or further batch analysis.

Leverage open interoperability frameworks like EdgeX Foundry for your various edge computing deployments. The EdgeX framework extends cloud-native design principles all the way to the thin compute edge, providing flexibility while also unifying an open ecosystem of both commercial and open source value-add around the open API. Furthermore, there will be embedded commercial variants that compress the discrete platform microservices into a tiny C-based binary, so the code can run on highly-constrained devices or serve use cases that need deterministic real-time response. There are inherent physics involved in the tradeoff between flexibility and performance, but even these compressed variants will still be able to take advantage of much of the plug-in value-add within the EdgeX ecosystem, such as device and application services for south- and north-bound data transmission. In all cases, with the open, vendor-neutral EdgeX API you can evolve solutions more readily with microservices written by third parties in the broader ecosystem.

Make sure your edge hardware is appropriately robust to handle the demands of the physical world for the deployed use case. A $30 maker board is great for proof of concept (PoC) projects on the bench; however, it costs more than $100 when you fully package it in an enclosure in low volume, and it will quite possibly fail in a typically rugged field deployment since it wasn’t intended for these environments.

Speaking of robustness, consider leveraging virtualization, automated workload management and orchestration tools and redundant hardware to provide fault tolerance in mission-critical use cases. Probably not something you’re going to care about if your edge solution is monitoring a connected cat toy, but certainly worth consideration if downtime in your factory costs thousands if not tens of thousands of dollars a minute.

Overprovision the hardware that you deploy in the field in terms of I/O and compute capability. As long as you use software-defined technology as much as possible by extending cloud-native software design principles to capable edge devices and deployed devices have the necessary physical I/O and compute headroom, you can continuously update your edge functionality in the field as your needs inevitably evolve over time. If you don’t deploy the right I/O for future-proofing, you’re going to spend money on a truck roll which typically costs upwards of $750. In other words, how much does that maker board really cost?

Speaking of truck rolls, developers often overlook device management when starting an IoT project because naturally their first concern is their application. It’s important to really think about device management from the start, including not only how the health of your infrastructure will be monitored on an ongoing basis, but also how your deployed devices will be updated in the field at scale. When you’re doing a PoC with one to a few devices, it’s easy enough to remote into each device individually to manage it through the command line, but try that for thousands, much less tens of thousands to millions of deployed devices. And the last thing you want to be doing is driving with USB sticks out to the sticks to update devices one by one manually.

Consider whether the infrastructure will be running on a LAN or WAN relative to the subscriber devices that access it. Note the break point in Figure 1. This makes a big difference in terms of tolerance for downtime in any given use case.

Modularize your hardware designs as much as possible, including with field-upgradable components. However, note that modularization can affect cost and reliability, since modular connections tend to be more failure-prone due to corrosion and vibration. In fact, it’s advisable to balance modularity with soldering down certain components — such as memory modules — on edge hardware that will run 24/7 in harsh environments.

Make sure your edge hardware has appropriate long-term support — typically a minimum of five years beyond the ship date. This applies to both the hardware and available supported OS options.

In general, plan on flexibility to address OS soup at thinner compute edges and to support both x86 and advanced RISC machine (ARM) based hardware. In Figure 1, the device edge is pretty much all ARM. This is another reason to leverage platform-independent — both silicon and OS — edge application frameworks.

Make sure to invest in root of trust (RoT) down to the silicon level. RoT silicon, such as a Trusted Platform Module, enables you to make sure your device attests that it is what it says it is and, with secure boot, that it is running the software it should be running. This RoT is foundational for any good defense-in-depth security strategy. On the security usability front, Intel and ARM’s collaboration on secure device onboarding is an important effort to facilitate trusted late binding of ownership to devices in a multi-party supply channel. This effort is gaining steam, including FIDO’s recent decision to launch an IoT track and make secure device onboarding its first standardization effort within that track.

Stay tuned for the next installments of this series in which I’ll dig deeper into the edge topic with pointers on sizing edge workloads, my three rules for Edge and IoT scale and eventually how we scale to the grail.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


August 12, 2019  1:22 PM

As IoT focuses on ease of access, vulnerability management suffers

Wayne Dorris
Internet of Things, iot, iot security, IoT standards, IoT strategy

The task of securing IoT devices requires herculean effort. Like Hercules fighting the Hydra, it often seems that for each vulnerability that gets patched, two more rise up to take its place. For all the good that IoT has done in our personal and professional lives, the reality is that innovation has continued to outpace security, and new IoT devices are still hitting the market without adequate security measures.

This comes as little surprise given the exponential growth that the IoT industry has enjoyed over the past decade. In 2009, there were fewer than a billion IoT devices in use, according to Statista. By 2020, that number is expected to grow to more than 20 billion. How can security controls keep up? How can IT teams accustomed to dealing with standard OSes like Windows, Linux and Unix adapt to the hundreds, even thousands, of different OSes utilized by IoT devices? Is it even possible to standardize security when the attack surface spans such a broad range of devices?

There are no easy answers, of course, but the task of vulnerability management isn’t going away. Thankfully, there are concrete steps that manufacturers, integrators and end users can take to help move the industry in the right direction.

Building a better baseline through education

One of the most pressing issues in IoT security is the lack of general knowledge. This knowledge gap represents a real problem, and addressing it is a key part of what will move the IoT industry forward and grow consumer confidence. It can be tricky for IT teams unfamiliar with the ins and outs of specific IoT devices to identify which vulnerabilities represent major problems and which don’t. If IT teams don’t understand the context in which a device operates, it can lead to drastic steps such as unnecessarily isolating a seemingly vulnerable device from the network.

The matter is compounded by the fact that most IT security departments expect IoT devices to have the same security and mitigation controls as the enterprise servers they put on their network. Most IoT devices are application-specific, with limited memory and computing power. They rarely run a full OS, so many familiar security controls simply are not available for mitigation. It’s important for end users to develop a network security baseline specific to IoT devices rather than trying to force each IoT device into their existing network security guidelines.
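
As a rough illustration of what an IoT-specific baseline might look like, here is a minimal Python sketch; the device classes, port lists and observed values are all hypothetical:

# Hypothetical per-device-class baselines: the ports each class is expected to use.
BASELINES = {
    "ip_camera":    {80, 443, 554},   # HTTP, HTTPS, RTSP
    "badge_reader": {443},            # HTTPS only
}

def check_device(device_class, observed_ports):
    """Return any open ports that deviate from the baseline for this device class."""
    unexpected = observed_ports - BASELINES[device_class]
    return [f"unexpected open port {port}" for port in sorted(unexpected)]

# Example: a camera that is also listening on Telnet (23) gets flagged for review.
print(check_device("ip_camera", {80, 443, 554, 23}))

The point is that the baseline is defined by what the device class is supposed to do, not by the general-purpose server controls the device cannot support.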

Reputable manufacturers regularly issue patches to correct any vulnerabilities they have identified. In fact, most will even have a contact form where users can report potential vulnerabilities that the company has yet to patch. But it’s important to realize that these things take time. It takes an average of 38 days to patch a vulnerability, according to tCell’s “Security Report for In-Production Web Applications,” but savvy attackers know that most organizations won’t install a patch the day it becomes available.

In my experience, it generally takes enterprises between 120 and 180 days to actually install a patch. This creates a window during which many attackers will attempt to use the unsecured device to infiltrate a network. Helping users understand the importance of prompt patching can help mitigate this issue. To make matters worse, attackers have become faster than ever at exploiting these vulnerabilities. Recent research from Gartner indicates that the average time from a vulnerability being reported to the time it is exploited was just 7.72 days in 2017, a dramatic drop from 13.5 days in 2016 and 25.4 days between 2008 and 2015. The window of opportunity for attackers is bigger than ever.
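
Using the figures above, a quick back-of-the-envelope calculation (a sketch only; real timelines vary widely by organization) shows how large that exposure window can be:

# Figures cited above: time from disclosure to exploit vs. typical enterprise patch lag.
days_to_exploit = 7.72                            # Gartner average, 2017
days_to_patch_low, days_to_patch_high = 120, 180  # typical enterprise patching window

window_low = days_to_patch_low - days_to_exploit
window_high = days_to_patch_high - days_to_exploit
print(f"Exposure window: roughly {window_low:.0f} to {window_high:.0f} days")
# Roughly 112 to 172 days during which a known, exploitable flaw sits unpatched.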

Similar education is needed regarding product life spans. Responsible companies will generally attempt to patch older products for as long as they can, but at a certain point every device becomes obsolete. Many devices reach a point where there is no longer enough space on the device to install a patch meant for a newer product. The fact is that the longer a device is on the network, the more vulnerable it becomes. In this way, product longevity can actually become a negative, because it can cause vulnerable devices to remain connected to a network long past the date the manufacturer stops supporting them. Helping IT teams gain a firmer understanding of the intended life span of a product can lessen this problem as well.

How can certifications help?

Another way to help close the knowledge gap is through certifications. These days, certifications are everywhere. From car companies to lightbulb manufacturers, it’s hard to find a consumer product that isn’t certified by some regulatory board or another. But for some reason, IoT devices have largely escaped this wave of certification, resulting in a market flooded with devices that can be difficult to distinguish from one another. This is a problem, particularly for customers searching for devices like connected surveillance cameras, where evaluating available security options is an obvious priority.

Thankfully, this has already begun to change as more manufacturers embrace the idea of third-party certification. Customers are growing more discerning as they become better informed, and requests for proposals today often specifically ask about certifications and recent audits. Customers increasingly want to verify that they’re working with a responsible company that will stand by its products, manage vulnerabilities and issue patches as needed. These certifications finally give them a way to do so.

It’s something of a virtuous circle: the more responsible companies buy into the idea of third-party validation, the more exposure customers have to that validation and the more trustworthy it becomes. This type of symbiotic relationship benefits everyone, but the lack of network security baseline standards for IoT devices means it will remain an uphill climb in the short term. I am hopeful that an international organization will develop an IoT certification that can be globally recognized, unifying the many regional certifications that enterprises must currently navigate.

GDPR sets the standard

Legislation is another important part of the equation, although the U.S. currently lacks comprehensive breach notification regulations at the federal level. Instead, the U.S. allows individual states to create their own guidelines. The resulting mishmash of laws and statutes has created a difficult environment for organizations operating across state lines, as it can be difficult to know when it’s necessary to disclose a breach or vulnerability to users. The National Conference of State Legislatures provides a handy guide that illustrates just how varied these regulations can be.

But fear not, because there is hope. The E.U.’s much-discussed General Data Protection Regulation (GDPR) represents perhaps the most sweeping change to international privacy law in history. GDPR grants individuals greater control over their personal information while unifying Europe’s data protection regulations under a single, easier-to-understand umbrella.

The most relevant section of GDPR for the purposes of vulnerability management is Article 25, which states that companies processing personal data, such as manufacturers of IoT devices, must have appropriate data protection measures in place. Rather than attempt to implement specific security measures that would quickly become obsolete as technology advances, GDPR instead outlines the mindset with which companies must approach the problem.

For now, GDPR applies only where the personal data of individuals in the E.U. is involved, but savvy manufacturers are already anticipating the enactment of similar regulations elsewhere in the world. The winds of change are blowing toward greater security, and manufacturers must recognize that, in the public mind, they bear the majority of the responsibility for vulnerability management. Integrators and contractors are often overlooked in U.S. regulations, which can put manufacturers in a difficult position. GDPR’s secure-by-default requirement addresses this by allowing integrators to be fined for failing to properly install or configure otherwise secure equipment.

This further underscores the importance of education initiatives and the fact that integrators and contractors must be included in those efforts. Manufacturers are often accused of making their devices too open, an accusation that overlooks the fact that ease of access is one of IoT’s biggest selling points. What matters is having appropriate controls in place, and manufacturers have expressed frustration that the integrators deploying their products fail to understand how to apply those controls in the best interest of the customer. After all, what good does it do to build a product that is secure by design and by default if the customer is never made aware that the protections exist? Ensuring that integrators have a more intimate understanding of IoT devices can mitigate this problem.

Working toward a more secure future

The rapid growth of IoT appears unlikely to subside anytime soon. Innovative new devices will continue to enter the market, providing exciting new tools and resources across a broad range of industries. But with these tools will come security challenges, and manufacturers, integrators and users must all be prepared to do their part to address them.

Legislation will come and certifications will grow in importance, but the key to effective vulnerability management is — and will remain — education. From available security controls to life cycle management, each party has a role to play and each must understand the steps they can take to improve device security.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

