IoT Agenda

August 19, 2019  4:12 PM

Lightweight Machine-to-Machine technology emerges in IoT networks

William Yan Profile: William Yan
Cat-M, Internet of Things, iot, IOT Network, IoT protocols, IoT wireless, M2M, Narrowband IoT

Last year’s report published by Gartner Research cited that connected things in use would hit 14.2 billion in 2019, with exponential growth in the years thereafter. IoT is garnering lots of attention, and a lot of organizations are designing and evaluating IoT services and technologies. One of the key IoT-focused emerging technologies is the Lightweight Machine-to-Machine (LwM2M) protocol, a device communication and management protocol designed specifically for IoT services.

What is LwM2M?

The standard is published and maintained by the Open Mobile Alliance (OMA). Version 1.0 was released in February 2017, and the protocol was initially designed for constrained devices with a radio uplink. As it stands now, LwM2M is a rather mature protocol: counting its early drafts, it has been in development for more than five years, during which it has gone through four versions of the specification and been tested in eight test fests organized by OMA. Compared to other IoT device management specifications, one can say the protocol is starting to gain wide market recognition.

Lightweight M2M components

The standard Lightweight M2M components and their technology stack. Source: AVSystem.

Lightweight M2M is often compared to Message Queuing Telemetry Transport (MQTT), another IoT protocol that is arguably the most popular device communication protocol in IoT services. MQTT is maintained by OASIS and published by the International Organization for Standardization as ISO/IEC 20922. It is a publish-subscribe messaging protocol and, as such, requires a message broker for data communication.

LwM2M, by contrast, comes with a well-defined data model representing specific service profiles, such as connectivity monitoring, temperature reading and firmware updates. Thanks to this well-defined data model structure, the standard enables common, generic, vendor-neutral and implementation-agnostic features, such as secure device bootstrapping, client registration, object and resource access, and device reporting. These mechanisms greatly reduce technology fragmentation and decrease potential interoperability errors.
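In the LwM2M data model, every addressable item is identified by an Object ID, an Object Instance ID and a Resource ID, conventionally written as a path such as /3/0/9 (Object 3 is the standard Device object; resource 9 within it reports battery level). As a rough illustration of this addressing scheme, here is a tiny sketch; the helper below is hypothetical and not part of any LwM2M SDK:

```python
from typing import NamedTuple

class LwM2MPath(NamedTuple):
    """An LwM2M address: /<object>/<instance>/<resource>."""
    object_id: int
    instance_id: int
    resource_id: int

def parse_lwm2m_path(path: str) -> LwM2MPath:
    # e.g. "/3/0/9" -> Device object (3), instance 0, battery-level resource (9)
    parts = path.strip("/").split("/")
    if len(parts) != 3:
        raise ValueError(f"expected /object/instance/resource, got {path!r}")
    return LwM2MPath(*(int(p) for p in parts))

print(parse_lwm2m_path("/3/0/9"))  # LwM2MPath(object_id=3, instance_id=0, resource_id=9)
```

Because every vendor addresses the same standard objects the same way, a server can read battery level from any compliant device without device-specific code.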

What are the major advantages of LwM2M?

LwM2M is gaining recognition and starting to be adopted for facilitating IoT deployments due to its specific benefits. These include the following:

  • Ultra-low link utilization, thanks to the protocol’s lightweight design.
  • Operation over links with small data frames and high latency, as is typical of most IoT use cases.
  • Greater power efficiency through Datagram Transport Layer Security (DTLS) session resumption and Queue Mode, which reduce energy usage and make the protocol suitable for devices in power saving mode and extended Discontinuous Reception modes.
  • Support for both IP and non-IP data delivery transports, which minimizes energy consumption.
  • Optimized performance in cellular-based IoT networks such as Narrowband-IoT and Long Term Evolution Cat-M.
  • Support for low-power wide area network binding.

LwM2M also meets the needs of enterprises that have to balance multiple factors — such as battery life, data rate, bandwidth, latency and costs — impacting their IoT services.

Who can benefit from the LwM2M protocol?

Lightweight M2M is becoming important for enterprises and service providers alike because of its successful use in IP and non-IP transports. It provides device management and service enablement capabilities for managing the entire lifecycle of an IoT device. The protocol also introduces more efficient data formats, optimized message exchanges and support for application layer security based on Internet Engineering Task Force (IETF) Object Security for Constrained RESTful Environments (OSCORE).

What does the future hold?

As a technology, Lightweight M2M is continually evolving, with an active OMA group constantly working to advance it. The next expected specification release is version 1.2, which will add capabilities in a number of areas: transport bindings for MQTT and HTTP; IETF-specified end-to-end secured firmware updates; a dedicated gateway enabling group communication; and optimizations such as registration templates and DTLS/TLS 1.3 support.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

August 14, 2019  2:34 PM

New-age intelligence systems for oil and gas operations

Abhishek Tandon Profile: Abhishek Tandon
Internet of Things, iot, IoT analytics, IoT data, IoT sensors, Machine learning

The oil and gas industry has been going through a tumultuous time of late. With volatile crude oil prices and geopolitical trends putting pressure on supply, it is becoming imperative for oil and gas companies to manage costs through operational effectiveness and minimize any production hurdles due to unplanned downtimes and unforeseen breakdowns.

Before making production decisions, organizations must understand the complex beast that is upstream operations, with data points to analyze including seismic and geological data to understand the ground conditions; oil quality data to determine gas oil ratio, water cut and submergibility; and pump calibration to ensure that it is optimized for the given conditions. Too much pressure on the pump and it is likely to break; too little and it is being underutilized.

Technology is likely to be a top disruptor in the future of oil and gas operations for this very reason. IoT sensor data analytics and machine learning will enhance the machine interface and improve the effectiveness of brownfield setups. But what really constitutes a true intelligence system that is likely to disrupt this highly complex industry?

The new avatar of data analysis

There has never been a dearth of data usage in oil and gas operations. Even before data science became cool, there was a tremendous amount of statistical research that was being utilized to understand seismic and geological data and manage oil field operations efficiently. Data has always been the backbone of decision making in the oil and gas sector.

With the advent of data technologies that can handle scaling and machine learning to help operations teams and scientists make sense of the data, new-age intelligence systems are also starting to become top priorities in the long list of digital transformation initiatives.

Extracting the unknown unknowns

There are a number of prebuilt models that are used to determine the oil quality and calibrate well equipment. By feeding information into these models, field engineers have a good idea of the way the well is operating.

Machine learning starts to surface the unknown unknowns. It makes the existing setup more sophisticated by analyzing multivariate patterns and anomalies that can be attributed to past failures. Moreover, the analysis patterns are derived from several years of data to reduce any inherent bias. Machine learning alone cannot answer all analysis questions; it is one piece of the puzzle and enhances existing knowledge acquired through years of research.

Constituents of a new-age intelligence system

The speed at which organizations receive data and conduct analysis is of the utmost importance. Hence, a sophisticated decision system needs to deliver insights quickly and with tremendous accuracy. A disruption in an oil well can cause a revenue loss as high as $1 million per day.

A true decision support system should have IoT infrastructure, real-time monitoring systems, supervised learning models and unsupervised learning models. IoT infrastructure includes low power sensors, gateways and communication setups to ensure that all aspects of well operations are connected and providing information in near real time. Real-time monitoring systems allow constant monitoring of the assets driven by key performance indicators and look for any issues or spikes that can be caught by the naked eye. Typical scenarios that real-time monitoring systems would cover include existing oil production, temperature and pressure of the well pumps and seismic activity around the well site.

Supervised learning models predict for known patterns and issues. These rely on past information of failures and models that have been honed over time in experimental and production setups. Organizations can use models for predictive maintenance of the pumps and pump optimization for higher productivity. Unsupervised learning models look for anomalies and possible signs of degradation. They utilize complex multivariate pattern data to determine correlations and possible deviations from normal behavior. Unsupervised models determine multivariate correlations between productivity and operational parameters using neural networks and identify early signs of pump degradation using time series analysis and anomaly detection to reduce the probability of a pump breakdown.
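To make the unsupervised side concrete, here is a deliberately tiny sketch of deviation-from-normal detection on a synthetic pump-pressure trace. The numbers and threshold are invented for illustration; a production system would use the multivariate, neural-network and time-series methods described above rather than a single-variable score:

```python
import statistics

def mad_anomalies(readings, threshold=10.0):
    """Flag indices whose deviation from the median exceeds `threshold`
    times the median absolute deviation (MAD). Median-based statistics
    resist being skewed by the very outliers we are looking for."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    return [i for i, r in enumerate(readings) if abs(r - med) > threshold * mad]

# Steady pressure around 100 psi with one spike at index 5.
pressures = [100.1, 99.8, 100.3, 100.0, 99.9, 140.0, 100.2, 100.1, 99.7, 100.0]
print(mad_anomalies(pressures))  # [5]
```

The design point carries over to the real thing: the baseline of "normal" must be learned in a way that the failures themselves cannot distort, which is why several years of data and robust statistics matter.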

Components of an intelligence system. Source: Abhishek Tandon

It is difficult to rely on one type of system. Constant improvements require a combination of human intelligence and machine intelligence. Due to the plethora of prior knowledge available to run oil wells effectively, machine learning and big data technologies provide the right arsenal for these systems to become even more sophisticated. A new-age intelligence system becomes a combination of known knowledge through existing models and unknown patterns derived from machine learning.

August 14, 2019  10:29 AM

Use network telemetry for improved IoT analytics

Jonah Kowall Profile: Jonah Kowall
Internet of Things, iot, IoT analytics, IoT data, IoT data management, IOT Network

Today’s solutions for IoT analytics have primarily relied on application instrumentation: the developer inserts code that sends telemetry back to some kind of monitoring or analytics platform, most often SaaS or hosted in the public cloud. These are great methods when you control the code and know what and how to instrument; oftentimes, people don’t have this prior knowledge. Another approach has been to apply packet capture technologies to IoT, but because so many IoT solutions leverage content delivery networks and public cloud, that approach leaves large visibility gaps.

Some forward-thinking organizations have begun to use traffic data such as NetFlow, sFlow and IP Flow Information Export (IPFIX) to send back IoT information within a network flow. This has several advantages when used to capture IoT-specific data. First, the data is standardized into industry-accepted formats, which I will get into later. Second, once the data is captured from the gateway, it can be correlated with traffic data coming from the data center or cloud services in use. Today’s public cloud environments all have the ability to generate and export flow data, including the three examples listed below, sorted by popularity.

  1. Amazon provides the Virtual Private Cloud (VPC) Flow Log service. The service exports network traffic summaries — such as traffic levels, ports, network communication and other network-specific data — across AWS services on user-defined VPCs to understand how components communicate. The data is published to CloudWatch Logs or delivered to a Simple Storage Service (S3) bucket, and can be fed to other services such as Kinesis. The data contains basic network data about the communication flow and is published every 10 to 15 minutes. Unfortunately, Amazon’s service is a bit behind the other major cloud providers.
  2. Microsoft Azure provides the Network Security Group Flow Logs. This service similarly publishes the logs in a JSON format to Azure storage. The one difference — which improves upon Amazon’s implementation — is that Microsoft publishes the data in real-time, making it more useful operationally.
  3. Finally, Google is ahead of the pack on this data source. Google has created its own VPC Flow Log service, which can be consumed by Stackdriver Logging. Google does everything the others do but, most importantly, also embeds latency and performance data within the exported logs. The data is highly granular, which makes it more useful, but it also generates a lot of volume.
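As a small worked example of consuming one of these sources, here is a sketch that parses an AWS VPC Flow Log record in its default (version 2) field order. The account and interface IDs below are made up, and a real pipeline would read records from CloudWatch Logs or an S3 bucket rather than a string literal:

```python
# Field order of the default (version 2) VPC Flow Log record format.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

def parse_flow_record(line: str) -> dict:
    """Split one space-delimited flow log line into named fields,
    converting numeric columns to int where present ('-' means the
    field is absent, e.g. in NODATA/SKIPDATA records)."""
    record = dict(zip(FIELDS, line.split()))
    for key in ("srcport", "dstport", "protocol", "packets", "bytes", "start", "end"):
        if record.get(key, "-") != "-":
            record[key] = int(record[key])
    return record

sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 49152 443 6 "
          "20 4249 1565800000 1565800060 ACCEPT OK")
rec = parse_flow_record(sample)
print(rec["dstport"], rec["bytes"], rec["action"])  # 443 4249 ACCEPT
```

Once records are in this shape, correlating them with gateway-side flow data is a matter of joining on addresses, ports and time windows.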

Tools for network-flow export

As you can see, there are many implementations. All of them provide a rich set of summarized data sets that are very useful for understanding how services interact, which services are most used and which applications consume network resources or answer requests. This data is valuable for countless operational and security use cases.
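The basic aggregation behind those use cases can be sketched in a few lines: reduce flow records to per-conversation byte totals and rank them. The record shape here is simplified to just the fields needed for illustration:

```python
from collections import Counter

def top_talkers(flows, n=3):
    """Sum bytes per (src, dst) pair and return the n busiest pairs --
    the core aggregation behind most flow-analytics dashboards."""
    totals = Counter()
    for flow in flows:
        totals[(flow["src"], flow["dst"])] += flow["bytes"]
    return totals.most_common(n)

flows = [
    {"src": "10.0.0.5", "dst": "10.0.1.9", "bytes": 1200},
    {"src": "10.0.0.5", "dst": "10.0.1.9", "bytes": 800},
    {"src": "10.0.0.7", "dst": "10.0.1.9", "bytes": 300},
]
print(top_talkers(flows, n=2))
# [(('10.0.0.5', '10.0.1.9'), 2000), (('10.0.0.7', '10.0.1.9'), 300)]
```

Real tools group by many more keys (ports, protocols, applications, sites), but the summarize-then-rank pattern is the same.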

If you are implementing on a smaller device and want to collect data from the gateway or the IoT device itself, there are lightweight network flow-export tools that can provide a lot of additional context on network traffic generated by the hardware. These agents can sit on Windows or Linux systems, and many of them will run on embedded Linux devices as well. Here are some options:

  1. nProbe has been around for a long time and hence is very mature and heavily used. The company behind it has been tuning and expanding its capabilities for over a decade. While nProbe was once free, it now costs money, but it can classify over 250 types of applications using deep packet inspection. These application types and latency information are embedded in the exported flow, adding additional value. The solution can operate in both packet capture mode and PF_RING mode to reduce the overhead on the operating system.
  2. kProbe is a Kentik product to do what nProbe does, which is to convert packet data from the network card to NetFlow or kFlow. While it doesn’t have as many application decodes, it’s free to use and highly efficient.
  3. SoftFlowd is a great open-source project, but it hasn’t had too many updates recently. Similar to the other solutions above, this small open-source agent converts packet data to flow data. The product has been tuned over many years and is highly efficient. It lacks a lot of application classification, but it does do some.
  4. NDSAD is a host-based agent that captures traffic from the interfaces and exports to NetFlow v5. It also supports more advanced capture methods for lower latency capture from the network card. This project doesn’t execute application classification, so the exported flow is less rich when it comes out as NetFlow.

Analyze flow data with these tools

Once these products are in place, there are many tools to analyze their output. Unlike tracing tools on the software side, which lock you into a specific implementation, the network data sources are standardized. This is the case for NetFlow, Simple Network Management Protocol (SNMP) and streaming telemetry, though the last is less standardized than the other two.

While each vendor that makes network devices has its own analytics and management platform, these platforms don’t support other vendors, and most environments are highly variable, with many vendors and open-source components deployed. Each device has its own format for NetFlow, but this is handled by flexible NetFlow templates and IPFIX. SNMP is handled via management information bases. Streaming telemetry is a new data type, but it lacks data taxonomy standards, which is a step back from SNMP. Tools that ingest any of this network data will normalize it so the user doesn’t need to do that work. That means you can avoid lock-in even when using vendor-specific implementations, since access becomes standard once the data is in vendor-neutral, network-based analytics tools.

Aside from the vendor tools, there are more popular third-party tools, such as Kentik, and other open-source options. Most of them can handle NetFlow and other network data, but few can also handle cloud-based flow log data. In IoT, scale is an important consideration, which causes problems with many of the older tools built on traditional databases. Common commercial tools to analyze flow data include those built by SolarWinds, ManageEngine, Plixer, Paessler and Kentik. Below, I highlight a few open-source analytics products that have been actively maintained within the last five years.

  1. ntopng was designed by the same folks who made nProbe and Ntop. This product can take data from flow or packet data and does similar visualizations in a nice web-based user interface. This tool has been around for a long time and works great. However, it isn’t meant as a scalable analytics platform beyond a small number of low-volume hosts. It’s still a useful tool for those managing networks. It’s also suitable for those looking to gather data about what’s happening on the network and which devices are speaking to one another.
  2. Cflowd is a project by the Center for Applied Internet Data Analysis, which is a non-profit focused on gathering and analyzing data on the internet. This project is a good foundation for building a DIY analytics solution and is still maintained.
  3. sflowtool captures sFlow coming from various sources and can output text or binary data, which can be used to feed data into another tool. It can also convert the incoming data to NetFlow v5, which can be forwarded elsewhere. sFlow is a great data source, but not the most common. It contains data that Juniper generates from many of their devices.

As you can see, many of these analytics tools are not full-featured. More often than not, an organization that wants a free or open-source analytics solution ends up using Elasticsearch, Logstash and Kibana — the Elastic Stack — which runs into scalability issues when dealing with network data. This trend will progress quickly as the cloud creates unique requirements and constraints for organizations moving in that direction. We should see a lot more IoT projects using network data, as it’s a highly flexible and well-understood data source.

August 13, 2019  1:25 PM

Disentangle IoT’s legal and ethical considerations

Helena Lisachuk Profile: Helena Lisachuk
Data governance, Internet of Things, iot, IoT compliance, IoT cybersecurity, iot privacy, IoT regulations, iot security

If you’ve ever watched a toddler eat spaghetti, you know just how messy it can get. The pasta tangle resists every effort, and sauce gets everywhere. Hands get involved. But as the child grows, they learn how to use tools more effectively. Within a few years, they can use a fork to tame the tangle and make quick, neat work of a meal.

I think this is a good analogy for companies new to IoT solutions: they can find a tangle of compliance considerations they may not have expected. These might include legal and regulatory requirements, as well as ethical considerations around the use of IoT that may not be legally required but are good practice nonetheless. Companies with a global footprint have even more spaghetti on their plates, as they contend with each host country’s unique ruleset. Why is this?

The compelling strength of IoT lies in its ability to apply the power of the digital world to almost any problem in the physical world. This crossover means IoT touches rules made for each. An IoT-enabled insulin pump, for example, doesn’t just need to meet safety standards for a medical device; it also has to meet the privacy and cybersecurity standards of a digital tool, as well as respect and obey intellectual property laws. Then there are ethical considerations. Can you ensure that end users have truly informed consent as to how the device operates?

So how can organizations deploy IoT to achieve its benefits while modeling responsible corporate citizenship at the same time? Just as with the spaghetti, the answer is to use the right tool. In this case, the tool is design thinking. Consider framing current and upcoming laws and regulations as design constraints, then craft IoT solutions accordingly. With growing public awareness of ethical and privacy issues in the digital realm, organizations can’t afford for IoT design to be an afterthought. The first step? Get a clear understanding of what’s on the compliance plate.

The different strands of regulation

Generally speaking and not surprisingly, the wider the scope of an IoT solution, the greater the number of compliance considerations it’s likely to encounter. These commonly include:

  • Privacy and security. Since IoT’s sweet spot is collecting and analyzing massive volumes of data, perhaps the largest area of regulation is how to protect that data. There are multiple data privacy and security laws in multiple nations, each with a different impact on IoT solution design. Adding to the complexity: these laws can vary by industry — such as healthcare or energy — and requirements can vary widely even within a given region. For instance, while many know that the European Union’s (EU) General Data Protection Regulation (GDPR) regulates how many forms of data are collected and stored across the EU, some may not realize that GDPR isn’t necessarily uniform across the EU. Some aspects of those rules are left up to individual member states to define and implement.
  • Technical regulations. Technical IoT regulation can start at a level as granular as component technologies. While companies may not need to design the sensors or communication protocols they use, they should be aware of the regulations that govern them. For example, communication protocols using unlicensed spectrum may be difficult to use in certain areas, such as airports.
  • Intellectual property, export and trade compliance rules. IoT solutions that span national borders can raise difficult questions ranging from who owns intellectual property to how to comply with tariffs. In fact, moving certain types of data and information across borders can trigger laws on the export of controlled technology.
  • Workplace and labor. Legal and ethical concerns don’t just apply to customer-facing technologies. There are just as many regulatory issues for purely internal IoT applications. Solutions to improve workplace efficiency can touch regulations for gathering employee data, and how that data can — or can’t — be used in employment decisions or performance reviews.

Finding the right tool to untangle

When laid out in such a list, IoT’s potential legal and ethical considerations can seem daunting. The key to not being overwhelmed is to not ignore them. Start your assessment of legal and ethical considerations early in the design of an IoT solution. That way you can tailor the solution to the desired outcomes and you will not find yourself forced into costly changes during implementation.

Tools and expert advice at this early stage can also help you understand which regulations impact your potential IoT use case. For example, Deloitte Netherlands has created a tool that can sift through the list of EU and national regulations and pull out those that are applicable to a given IoT solution. Such a list of applicable regulations can help make clear the specific requirements that an IoT solution must meet, helping to tailor hardware, software and governance decisions to suit.

Ethical IoT as a differentiator

Legal and regulatory compliance can often seem like a costly and tiresome burden, but breaches or the misuse of data can have real and staggering cost — both in dollars, and damage to reputation.

This fact is prompting some companies to take a different approach to IoT. Rather than viewing legal and ethical compliance as a burden, they’re looking to make ethics a competitive differentiator. Much like organic products have become a differentiator for some food brands, so too can a transparent and ethical approach to IoT be a differentiator, allowing customers to have confidence in a brand as a steward of their information collected via IoT.

Ethics can often seem like a scary prospect to companies: get it wrong and you end up in the news. But ethics really is about what people value, and that can be an incredibly powerful tool for companies. After all, if you understand what people value, you can deliver that value to them more easily. Understanding the legal and ethical considerations of IoT is not just a compliance check; it is a core requirement of doing IoT right.

August 12, 2019  3:53 PM

How to develop edge computing solutions

Jason Shepherd Profile: Jason Shepherd
Edge computing, Internet of Things, iot, IoT architecture, IoT deployment, IoT edge, IoT edge computing, IoT hardware, IoT software, IoT strategy

This is the second part of a four-part series. Start with the first post here.

When considering how to architect for scale with edge computing solutions, it’s important to talk about both hardware and software in a system-level context. As a rule of thumb, needs on both fronts get more and more complex the closer you get to the device edge.

I created the chart below to help visualize this dynamic. The blue line represents hardware complexity and the green line indicates software complexity. The X-axis represents the continuum from the cloud down through various edge categories, ultimately ending at the device edge in the physical world.

Hardware and software customization

Figure 1: Inherent level of hardware and software customization from cloud to device edge. Source: Jason Shepherd of Dell Technologies

Hardware gets custom faster than software as you approach the device edge

There are a few key trends in this continuum that impact architecture and design decisions for IoT and edge computing. From the hardware lens, as you get into the remote field edges you need to consider elevated thermal support to run 24/7 in sealed networking cabinets, as well as potentially telco-specific equipment certifications.

Following the blue line further left from a traditional datacenter, note how hardware complexity starts to grow even faster than software. As you approach IoT and edge gateway-class compute, you begin to see needs for very specific I/O and connectivity protocols, many choices spanning Linux and Windows — what I call OS Soup — increasing ruggedization, specific shapes and form factors, and industry-specific features and certifications, such as Class 1, Division 2 for explosion proof.

The sharp ramp in complexity at the embedded and control edge

There’s a key inflection point for complexity at the embedded or control edge when hardware gets so constrained that software needs to be embedded, losing the flexibility of virtualization and containerization. Alternatively, the software requires a real-time operating system to address deterministic response needs, such as within programmable logic controllers on a factory floor and electronic control units in a vehicle. I call this inflection point the thin compute edge, and from there down to the device edge, the complexity curve ramps sharply up until you’re basically building custom hardware for every connected product.

Software consistency can be extended to the thin compute edge

Meanwhile, the software complexity curve — represented as the green line in Figure 1 — stays flatter a little longer, remaining consistent with established IT standards from the cloud down through telco edge and on-premises data centers until the first significant bump occurs with the aforementioned OS soup. The curve continues to stay relatively flat until you hit resource-constrained devices at the thin compute edge.

This inflection point is driven by total available memory — not CPU processing capability — and these days it’s generally about 512MB, which is enough to accommodate an OS and a minimum set of containerized applications to serve a meaningful purpose. The flexibility afforded by virtualization and containerization to maintain software-defined flexibility from the cloud to all the thin compute edges out there comes with a tax on footprint; however, this is a worthwhile tradeoff if any given device can support it. Eventually the software complexity curve reaches parity with the hardware curve at the extreme device edge, and you’re now creating custom embedded software for every device too.

Key considerations for edge computing solutions

We’ve established that both hardware and software inherently get more complex the closer you get to the device edge. Software stays more consistent a little longer, all the way down to the thin compute edge when available memory becomes a constraint and you have to go embedded. Here are some key considerations for developing edge computing infrastructure.

Extend cloud-native principles, such as platform-independent, loosely-coupled microservice software architecture, down to as close to the thin compute edge as possible. In doing so, you can maintain more consistent software practices across more edges, even when you inevitably need to go more custom for the hardware. The opportunity to bridge the software-hardware complexity gap close to the thin compute edge with more consistent software tools is represented by the yellow bar in Figure 1. Further, abstracting software into individual microservices — such as discrete functions — as much as possible enables you to easily migrate workloads up and down the edge to cloud continuum as needed. For example, in an initial deployment you may start with running an AI model in the cloud for simplicity, but as your data volume grows you’ll find that you need to push that model down to a compute node closer to the device edge to act on data in the moment and only backhaul meaningful data for retention or further batch analysis.
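A toy sketch of that workload portability: a microservice whose only job is to act on readings locally and pass along just the meaningful ones. Because it has no environment-specific dependencies, the same function could run as a cloud service today and be pushed down to an edge node tomorrow; the band limits and record shape are invented for illustration:

```python
def backhaul_filter(readings, low=10.0, high=90.0):
    """Act on data in the moment: keep only readings outside the normal
    band, so only meaningful data is sent upstream for retention or
    further batch analysis. The function takes plain data in and returns
    plain data out, so it is indifferent to where it is deployed."""
    return [r for r in readings if not (low <= r["value"] <= high)]

readings = [{"sensor": "temp-1", "value": 22.5},
            {"sensor": "temp-1", "value": 97.3},
            {"sensor": "temp-2", "value": 3.8}]
print(backhaul_filter(readings))  # only the out-of-band readings survive
```

In practice this function would be packaged as a container and moved between cloud and edge by an orchestrator, but the discipline that makes the move cheap is visible even at this scale: no hard-coded endpoints, no host-specific state.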

Leverage open interoperability frameworks like EdgeX Foundry for your various edge computing deployments. The Edge X framework extends cloud-native design principles all the way to the thin compute edge, providing flexibility while also unifying an open ecosystem of both commercial and open source value-add around the open API. Furthermore, there will be embedded commercial variants that compress the discrete platform microservices into a tiny C-based binary, so the code can run on highly-constrained devices or serve use cases that need deterministic real-time. There are inherent physics involved in the tradeoff between flexibility and performance, but even these compressed variants will still be able to take advantage of much of the plug-in value-add within the EdgeX ecosystem, such as device and application services for south- and north-bound data transmission. In all cases, with the open, vendor-neutral EdgeX API you can evolve solutions more readily with microservices written by third parties in the broader ecosystem.

Make sure your edge hardware is appropriately robust to handle the demands of the physical world for the deployed use case. A $30 maker board is great for proof-of-concept (PoC) projects on the bench; however, it costs more than $100 when fully packaged in an enclosure in low volume, and it will quite possibly fail in a typically rugged field deployment, since it wasn’t intended for those environments.

Speaking of robustness, consider leveraging virtualization, automated workload management and orchestration tools and redundant hardware to provide fault tolerance in mission-critical use cases. Probably not something you’re going to care about if your edge solution is monitoring a connected cat toy, but certainly worth consideration if downtime in your factory costs thousands if not tens of thousands of dollars a minute.

Overprovision the hardware that you deploy in the field in terms of I/O and compute capability. As long as you use software-defined technology as much as possible by extending cloud-native software design principles to capable edge devices, and deployed devices have the necessary physical I/O and compute headroom, you can continuously update your edge functionality in the field as your needs inevitably evolve over time. If you don't deploy the right I/O for future-proofing, you're going to spend money on a truck roll, which typically costs upwards of $750. In other words, how much does that maker board really cost?
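Using the figures above ($100 for a packaged maker board, $750 for a truck roll) plus a hypothetical $300 for a ruggedized, I/O-rich unit, the back-of-envelope math looks like this:

```python
# Back-of-envelope cost comparison: a packaged maker board that needs one
# field visit to retrofit I/O, versus an overprovisioned unit that can be
# updated over the air. The $300 ruggedized-unit price is hypothetical.

TRUCK_ROLL = 750  # typical cost of a single field visit

maker_board = 100 + TRUCK_ROLL  # packaged PoC board + one retrofit visit
overprovisioned = 300           # ruggedized, I/O-rich unit, no visit needed

print(f"maker board total:     ${maker_board}")      # $850
print(f"overprovisioned total: ${overprovisioned}")  # $300
```

One avoided truck roll more than pays for the headroom, which is the whole point of the paragraph above.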

Speaking of truck rolls, developers often overlook device management when starting an IoT project because naturally their first concern is their application. It's important to really think about device management from the start, including not only how the health of your infrastructure will be monitored on an ongoing basis, but also how your deployed devices will be updated in the field at scale. When you're doing a PoC with a handful of devices, it's easy enough to remote into each device individually and manage it through the command line, but try that for thousands, much less tens of thousands to millions of deployed devices. And the last thing you want to be doing is driving with USB sticks out to the sticks to update devices one by one manually.

Consider whether the infrastructure will be running on a LAN or WAN relative to the subscriber devices that access it. Note the break point in Figure 1. This makes a big difference in terms of tolerance for downtime in any given use case.

Modularize your hardware designs as much as possible, including with field-upgradable components. However, note that modularization can impact cost and reliability, since modular connections tend to be more failure-prone due to corrosion and vibration. In fact, it's advisable to balance modularity with soldering down certain components, such as memory modules, on edge hardware that will run 24/7 in harsh environments.

Make sure your edge hardware has appropriate long-term support — typically a minimum of five years beyond the ship date. This applies to both the hardware and available supported OS options.

In general, plan on the flexibility to address OS soup at thinner compute edges and both x86- and ARM (advanced RISC machine)-based hardware. In Figure 1, the device edge is pretty much all ARM. This is another reason to leverage platform-independent (both silicon and OS) edge application frameworks.

Make sure to invest in root of trust (RoT) down to the silicon level. RoT silicon, such as a Trusted Platform Module, enables your device to attest that it is what it says it is and, with secure boot, that it is running the software it should be running. This RoT is foundational for any good defense-in-depth security strategy. Speaking of the aforementioned security usability, Intel and ARM's collaboration on secure device onboarding is an important effort to facilitate trusted late binding of ownership to devices in a multi-party supply channel. This effort is gaining steam, including FIDO's recent decision to launch an IoT track and make secure device onboarding its first standardization effort within that track.

Stay tuned for the next installments of this series, in which I'll dig deeper into the edge topic with pointers on sizing edge workloads, my three rules for edge and IoT scale and eventually how we scale to the grail.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

August 12, 2019  1:22 PM

As IoT focuses on ease of access, vulnerability management suffers

Wayne Dorris
Internet of Things, iot, iot security, IoT standards, IoT strategy

The task of securing IoT devices requires herculean effort. Like Hercules fighting the Hydra, it often seems that for each vulnerability that gets patched, two more rise up to take its place. For all the good that IoT has done in our personal and professional lives, the reality is that innovation has continued to outpace security, and new IoT devices are still hitting the market without adequate security measures.

This comes as little surprise given the exponential growth that the IoT industry has enjoyed over the past decade. In 2009 there were fewer than a billion IoT devices in use, according to Statista. By 2020, that number is expected to grow to more than 20 billion. How can security controls keep up? How can IT teams accustomed to dealing with standard OSes like Windows, Linux and Unix adapt to the hundreds, even thousands, of different OSes utilized by IoT devices? Is it even possible to standardize security when the attack surface spans such a broad range of devices?

There are no easy answers, of course, but the task of vulnerability management isn’t going away. Thankfully, there are concrete steps that manufacturers, integrators and end users can take to help move the industry in the right direction.

Building a better baseline through education

One of the most pressing issues in IoT security is the lack of general knowledge. This knowledge gap represents a real problem, and addressing it is a key part of what will move the IoT industry forward and grow consumer confidence. It can be tricky for IT teams unfamiliar with the ins and outs of specific IoT devices to identify which vulnerabilities represent major problems and which don’t. If IT teams don’t understand the context in which a device operates, it can lead to drastic steps such as unnecessarily isolating a seemingly vulnerable device from the network.

The matter is compounded by the fact that most IT security departments expect IoT devices to have the same security and mitigation controls as the enterprise servers they put on their network. Most IoT devices are application-specific, with limited memory and computing power. They rarely run a full OS, and many standard security controls simply aren't available for mitigation. It's important for end users to develop a network security baseline specific to IoT devices, rather than trying to fit IoT devices into their current network security guidelines.

Reputable manufacturers regularly issue patches to correct any vulnerabilities they have identified. In fact, most will even have a contact form where users can report potential vulnerabilities that the company has yet to patch. But it’s important to realize that these things take time. It takes an average of 38 days to patch a vulnerability, according to tCell’s “Security Report for In-Production Web Applications,” but savvy attackers know that most organizations won’t install a patch the day it becomes available.

In my experience, it generally takes enterprises between 120 and 180 days to actually install a patch. This creates a window during which many attackers will attempt to use the unsecured device to infiltrate a network. Helping users understand the importance of immediate patching can help mitigate this issue. To make matters worse, attackers have become faster than ever at exploiting these vulnerabilities. Recent research from Gartner indicates that the average time between a vulnerability being reported and its first exploitation fell to just 7.72 days in 2017, a dramatic drop from 13.5 days in 2016 and 25.4 days between 2008 and 2015. The window of opportunity for attackers is bigger than ever.
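Putting those numbers together shows just how wide the exposure window is:

```python
# Exposure window implied by the figures above: average time to first
# exploitation (7.72 days in 2017, per Gartner) versus typical enterprise
# patch time (120 to 180 days, per the author's experience).

time_to_exploit = 7.72
patch_time_low, patch_time_high = 120, 180

window_low = patch_time_low - time_to_exploit
window_high = patch_time_high - time_to_exploit
print(f"attacker window: {window_low:.1f} to {window_high:.1f} days")
```

In other words, an attacker can have well over three months of runway on a device whose fix already exists.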

Similar education is needed regarding product life spans. Responsible companies will generally attempt to patch older products for as long as they can, but at a certain point every device becomes obsolete. Many devices reach a point where there is no longer enough space on the device for the installation of a patch meant for a newer product. The fact is that, the longer a device is on the network, the more vulnerable it becomes. In this way, product longevity can actually become a negative because it can cause vulnerable devices to remain connected to a network long past the date that the manufacturer stops supporting them. Helping IT teams gain a firmer understanding of the intended life span of a product can lessen this problem as well.

How can certifications help?

Another way to help close the knowledge gap is through certifications. These days, certifications are everywhere. From car companies to lightbulb manufacturers, it’s hard to find a consumer product that isn’t certified by some regulatory board or another. But for some reason, IoT devices have largely escaped this excess of certification, resulting in a market that is flooded with devices that can be difficult to distinguish from one another. This is a problem, particularly for customers searching for devices like connected surveillance cameras where evaluating available security options is an obvious priority.

Thankfully, this has already begun to change as more manufacturers embrace the idea of third-party certification. Customers are growing more discerning as they become better informed, and requests for proposals today often specifically ask about certifications and recent audits. Customers increasingly want to verify that they’re working with a responsible company that will stand by their products, manage vulnerabilities and issue patches as needed. These certifications have finally given them a way to do it.

It’s something of a self-fulfilling prophecy; the more responsible companies buy into the idea of third-party validation, the more exposure customers have to that validation and the more trustworthy it becomes. This type of symbiotic relationship benefits everyone, but the lack of network security baseline standards for IoT devices means it will remain an uphill climb in the short term. I am hopeful that an international organization will develop an IoT certification that can be globally recognized, unifying the many regional certifications that enterprises must currently navigate.

GDPR sets the standard

Legislation is another important part of the equation, although the U.S. currently lacks comprehensive breach notification regulations on a federal level. Instead, the U.S. allows individual states to create their own guidelines. The resulting mishmash of laws and statutes has created a difficult environment for organizations operating across state lines, as it can be difficult to know when it's necessary to disclose a breach or vulnerability to users. The National Conference of State Legislatures provides a handy guide that illustrates just how varied these regulations can be.

But fear not, because there is hope. The E.U.’s much-discussed General Data Protection Regulation (GDPR) represents perhaps the most sweeping change to international privacy law in history. GDPR grants individuals greater control over their personal information while unifying Europe’s data protection regulations under a single, easier-to-understand umbrella.

The most relevant section of GDPR for the purposes of vulnerability management is Article 25, which states that companies processing personal data, such as manufacturers of IoT devices, must have appropriate data protection measures in place. Rather than attempt to implement specific security measures that would quickly become obsolete as technology advances, GDPR instead outlines the mindset with which companies must approach the problem.

For now, GDPR only applies to companies operating in Europe, but savvy manufacturers are already anticipating the enactment of similar regulations elsewhere throughout the world. The winds of change are blowing toward greater security, and manufacturers must recognize that in the public mind, they bear the majority of the responsibility for vulnerability management. Integrators and contractors are often overlooked in U.S. regulations, which can put manufacturers in a difficult position. GDPR's secure-by-default requirement addresses this by allowing integrators to be fined for failing to properly install or configure otherwise secure equipment.

This further underscores the importance of education initiatives and the fact that integrators and contractors must be included in those efforts. Manufacturers are often accused of making their devices too open, an accusation that overlooks the fact that ease of access is one of IoT's biggest selling points. What's important is having appropriate controls in place, and manufacturers have expressed frustration that the integrators putting their products into place have failed to understand how to appropriately apply those controls in the best interest of the customer. After all, what good does it do to design a product with security in place by design and default if the customer is never made aware that the protections exist? Ensuring that integrators have a more intimate understanding of IoT devices can mitigate this problem.

Working toward a more secure future

The rapid growth of IoT appears unlikely to subside anytime soon. Innovative new devices will continue to enter the market, providing exciting new tools and resources across a broad range of industries. But with these tools will come security challenges, and manufacturers, integrators and users must all be prepared to do their part to address them.

Legislation will come and certifications will grow in importance, but the key to effective vulnerability management is — and will remain — education. From available security controls to life cycle management, each party has a role to play and each must understand the steps they can take to improve device security.


August 9, 2019  5:47 PM

Automating IoT security requires 20/20 vision

Reggie Best
Internet of Things, iot, IoT cybersecurity, IoT data, IoT data management, IoT devices, iot security, IoT sensors

Across all areas of cybersecurity, the automation trend is being driven by a massive cybersecurity skills shortage, as well as the need for security teams to deliver faster and more consistent performance against internal service level agreements. In many cases, the ability to effectively automate security is the only way security teams can increase the speed and quality of their operations and keep pace with ever-accelerating business requirements and digital transformation initiatives.

When it comes to IoT security, the most important first step in security automation is visibility. The old adage "you can't secure what you can't see" is particularly apt with IoT. Most organizations have no idea how many IoT endpoints are connected to their networks. It is critical that IoT security begins with the establishment of a visibility regime so security teams can bring all IoT endpoints under corporate security policy. To do this, they need technology that can:

  • Identify all IoT and IIoT endpoints.
  • Determine whether or not those endpoints are authorized.
  • Understand whether or not communications from those endpoints are occurring over acceptable network infrastructure and paths.
  • Determine whether or not network security policies are in conformance with corporate and industry guidelines.
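As a minimal sketch of the first two checks, discovered endpoints can be compared against an authorized inventory. The MAC addresses and the inventory here are purely illustrative:

```python
# Sketch: split endpoints seen on the network into authorized and rogue
# by checking them against a known-good inventory (addresses illustrative).

authorized = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def audit(discovered):
    """Return (known, rogue) endpoint lists from a discovery scan."""
    known = sorted(set(discovered) & authorized)
    rogue = sorted(set(discovered) - authorized)
    return known, rogue

known, rogue = audit(["aa:bb:cc:00:00:01", "de:ad:be:ef:00:99"])
print("authorized:", known)  # ['aa:bb:cc:00:00:01']
print("rogue:", rogue)       # ['de:ad:be:ef:00:99']
```

Real tooling discovers endpoints passively or via network scans, but the policy decision reduces to this kind of set comparison, which is what makes it automatable.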

Once visibility has been achieved, automation can begin. However, there are still some daunting challenges to overcome. For example, IoT networks are often very large hybrid infrastructures subject to constant change, so maintaining real-time visibility can be difficult. Additionally, IoT devices typically are closed or embedded systems that were never designed with security in mind, so there simply is no technical capacity for client-based security. The critical response-time, availability and uptime requirements of many IoT devices also make it impossible to interrogate these endpoints.

Quite simply, securing IoT endpoints is a different ballgame from securing traditional endpoints. We’ve established that visibility is the first step to winning in this new ballgame. This is a critical competency right now, because 40% of network endpoints today are IoT devices. This is not some future discussion.

Once real-time visibility has been established, the next step toward achieving IoT security automation is enabling data integration through APIs. By aggregating and analyzing IoT data, either in a security information and event management (SIEM) system or a data lake, organizations can then create closed-loop automation across their security tools for policy management, asset and infrastructure visibility, and risk and vulnerability management. The endpoints may not be able to protect themselves, but by automating the security tools around them, organizations can implement effective IoT security without burdening security teams with yet another set of manual security management tasks. This frees staff to focus on higher-order thinking, such as deriving security insights and establishing priorities for escalation and resolution or remediation.

As things stand today, IoT security automation is lagging because enterprises are focused on automation in core IT infrastructure areas, not IoT endpoints. This lack of prioritization for IoT security is borne out by a survey from Trustwave that found that more than half of businesses have faced IoT attacks, but only one-third consider IoT security to be very important.

Unfortunately, in this age of attackers moving laterally across networks, considering IoT security as anything less than very important jeopardizes the core IT network. Even if the IoT devices in question are on an operational technology (OT) network in a manufacturing facility, that network is most likely connected to the enterprise IT network. This means attackers can enter the OT network through an IoT device, and then move laterally onto the IT network where they can attack the core IT systems.

This lack of prioritization must change in the near future as IoT environments continue to scale. As mentioned earlier, 40% of enterprise endpoints are currently IoT; it is only a matter of time before that number goes beyond the 50-yard line and most enterprise endpoints will be IoT. At that point, IoT security automation will move from being a whiteboard future to a core security requirement.


August 9, 2019  3:58 PM

IoT simplifies manufacturing processes

Sanjeev Verma
Digital transformation, IIoT, industrial internet of things, Internet of Things, iot, IoT analytics, Manufacturing, Manufacturing applications, smart manufacturing

What honey is to a bee, IoT is to digital transformation. The 21st century has witnessed great accomplishments in IoT development, with the industrial internet of things contributing the largest share of those successes. Manufacturers and industrialists across sectors tend to miss out on lucrative opportunities because manufacturing processes are extremely complex; the processes can confuse businesses and cause irreversible losses. To counter this, manufacturers and industrialists adopt IoT-driven solutions, an important step toward making smart factories a reality.

This is the era of Industry 4.0 and the digital transformation of manufacturing. The manufacturing industry is attracting the most IoT-driven projects because of strong demand for customization, heightened customer expectations and the complexity of the global supply chain. A competitive market and various challenges push manufacturers across the globe to try innovative strategies. Intending to broaden productivity and simplify manufacturing, they are pursuing digital transformation with IoT as a one-stop solution.

IDC and SAP predicted that by 2017, 60% of global manufacturers would use data analytics to give their business operations an edge. Connected devices track this data to analyze complicated manufacturing processes and identify optimization possibilities. IoT technology gives manufacturers many advantages.

Predictive capabilities improve forecasting

It is not easy for a manager to keep an eye on each and every machine expected to contribute to the production process, yet tracking the performance of every machine is crucial for better production at lower costs. An IoT-driven network is key to identifying actual machine performance: it helps businesses trace machine faults in real time, and once a faulty machine is detected, manufacturers can immediately send it for repairs or decommission it.

Data provides insights into energy use

Energy is one of the largest expenses of a manufacturing firm. The bill arrives at the end of the billing cycle and highlights only the total units of energy consumed. Uneven energy consumption obviously causes unnecessary expenses for manufacturing units, but the catch is that the bill shows just the total amount to be paid. With IoT tools, the firm can see a proper breakdown of the bill, so specific inefficiencies get highlighted. Collecting data at the device level reveals which equipment is underperforming, and that machinery can then be assigned to processes where it runs efficiently. Every piece of machinery on the floor can be tracked, giving managers granular visibility into energy consumption, actionable insights about energy waste and regulatory compliance issues.
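The device-level breakdown the bill can't give you amounts to simple aggregation. Here's a sketch with illustrative machine names and meter readings, flagging any machine that consumes well above the fleet average:

```python
# Sketch: aggregate sub-meter readings per machine and flag outsized
# consumers. Machine names, kWh values and the 1.5x rule are illustrative.

from collections import defaultdict

readings = [
    ("press-1", 120.0), ("press-2", 118.0), ("press-3", 310.0),
    ("press-1", 125.0), ("press-2", 121.0), ("press-3", 305.0),
]

totals = defaultdict(float)
for machine, kwh in readings:
    totals[machine] += kwh

avg = sum(totals.values()) / len(totals)
flagged = [m for m, kwh in totals.items() if kwh > 1.5 * avg]
print(flagged)  # press-3 stands out against the fleet average
```

The same roll-up, run per shift or per location, is what turns one opaque utility bill into a ranked list of machines worth investigating.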

It is truly interesting how real-time data can provide insights about off-hour energy consumption, energy-saving opportunities and ways to optimize production schedules. Real-time data can benchmark similar pieces of equipment to understand machine health, and it can surface issues with underperforming machines before they escalate. It becomes easier for managers to evaluate different locations and zoom in on hidden operational inefficiencies in short time spans.

IoT upgrades quality analysis and product quality

A manufacturer is always on the lookout for ways to improve the quality of a given product. A product of superior quality reduces waste, increases customer satisfaction, lowers costs and boosts sales. Achieving this goal is not always easy, but IoT simplifies it by detecting faulty equipment, whether the cause is improper maintenance or an incorrect machine setup, before it obstructs the production process.

IoT helps reduce downtime

Profits are the backbone of every organization. Manufacturing units produce quality products to sustain customer loyalty, but quality alone is not enough to keep a firm ahead of the market competition; timely, accurate and high-quality production is the core driver of profit. Against all this, unreliable machinery is a glaring risk. If a machine breaks down mid-run, it upends the production planning for the current batch and incurs additional downtime expenses. By predicting the remaining life of machinery, IoT limits these costs and downtime, sparing manufacturers such nightmares.

IoT builds smart factories

Accurate information about products, production plants, networks and systems is in demand, and IoT provides it. Its real-time capabilities enable manufacturers and industrialists to take the best course of action to achieve business goals without wasting time or money, and to sail through glaring complexities. IoT highlights the hindrances that stall manufacturing processes and helps manufacturers make crucial performance-based decisions that strengthen business performance.

The manufacturing industry strives day in and day out to better meet customer expectations and specifications while operating with higher profitability. IoT aims to minimize human interference while boosting machine intelligence, delivering a competitive advantage that turns a regular manufacturing unit into a smart factory.


August 8, 2019  3:08 PM

What are the biggest problems IoT data scientists face?

Alex Fly
Data scientist, Internet of Things, iot, IoT analytics, IoT data, IoT data management

IoT is creating innumerable opportunities for businesses through the intelligent use of data curated by data scientists. In 2019, both LinkedIn and Glassdoor ranked data scientist as the most promising job due to the exponential popularity and relevance of big data, data mining and IoT. Although the job might indeed be attractive, the role is undergoing rapid change. This sensor-driven influx of data poses new challenges to the professionals tasked with converting the information into actionable insights.

According to estimates, by 2020 each person will generate 1.7 MB of data every second of the day, produced by more than 26 billion IoT devices. IoT data scientists are hired by companies and cities to distill this data into an incisive course of action.

As once-static data evolves to become more dynamic and processing takes place in real-time, IoT data scientists are faced with new hurdles that impact every level of the organization. To better understand how to transform IoT data into a successful strategy, data scientists must address the real-world challenges faced in this arena. Here are the top four challenges IoT data scientists currently face.

Managing expectations

An effective data science department depends on the breadth and depth of its manager's experience. Depth comes from working through varied situations and helming projects from the ground up; breadth requires familiarity with the entire landscape of technology, programs and possible outcomes.

For IoT data scientists, the limited lifespan of the technology coupled with the hyper-advancements in both hardware and software reduces the capability to have adequate depth and breadth. This sharp curve leads IoT data scientists to enter projects with unclear parameters, vague goals and an overabundance of data. The result is rogue data with unnecessary hours spent on data cleansing.

New technology is exciting, especially to a boardroom looking to maximize ROI and streamline processes wherever possible. However, IoT is still in its infancy, and the number of devices, wearables, appliances and vehicles equipped with sensors and IP addresses is expanding at an unmatched rate. To keep pace, businesses must invest in the architecture of systems and processes to support their IoT data scientists.

Language barriers

Like all scientists, IoT data scientists operate with a unique language. However, unlike most scientists, data scientists' work is not confined to a lab. The results of their insights have implications affecting all realms of the businesses they work within: production, operations, distribution, customer service and R&D. Terms like actuator, chirp, mesh network and Z-Wave are all imperative to an IoT data scientist, but likely sound like jargon to stakeholders. At the same time, developer languages like C, Python and Java are defining the possibilities and limitations of IoT software. Between the hardware and the software, IoT data scientists are faced with communicating highly complex realities to business partners.

In order to successfully communicate actionable strategies within the business, a degree of industry translation is required. Highly analytical data scientists might not have the business acumen to express and demonstrate the importance of their research, but companies determined to operate on the bleeding edge must strive to keep all parties IoT-literate as a way to mitigate misunderstandings across the language barrier.

Building the dream team

While companies are eager to mine their data and begin improving business, hiring an IoT data scientist is not a one-step solution to the problem. An effective IoT data scientist requires sufficient support and direction in order to operate efficiently. Additionally, data scientists’ resumes are notoriously filled with a litany of tech skills, but hiring managers are increasingly growing aware of the importance of soft skills such as problem-solving, communication and teamwork.

To curb this dilemma, companies must develop a framework of responsibilities for their IoT data scientists and simultaneously prepare hiring managers to screen for soft skills as well as IoT technical expertise. A whiz analyst won't do any good working within a faulty system, and no hire will succeed without teamwork and effective communication.

Perhaps most importantly, hiring managers must form a team with complementary technical skills, drawing on elements of engineering, analysis, infrastructure and quality control.

Data overload

IoT data scientists have access to abundant data such as speed, temperature, location, rate and proximity, which is gathered from an enormous number of interconnected sources in real-time. While the opportunities are endless, one of the hazards is data overload.

Data overload impacts data scientists, and it can also become burdensome to the network and raise questions about data storage. To overcome data overload, invest in proper data management architecture that performs pre-processing by applying cleaning algorithms. The system should leverage machine learning tools to create iterative improvement of the data, which produces more valuable insights as time passes.
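A pre-processing layer of that kind can start very simply. This sketch drops null readings and physically impossible values before anything reaches storage or an analyst; the bounds are illustrative and would be configured per sensor in practice:

```python
# Sketch of a cleaning step in a data management pipeline: discard null
# samples and values outside the sensor's physical range (bounds
# illustrative; a temperature sensor rated for -40C to 125C is assumed).

def clean(samples, lo=-40.0, hi=125.0):
    """Keep only non-null readings within the sensor's rated range."""
    return [s for s in samples if s is not None and lo <= s <= hi]

raw = [21.5, None, 22.0, 999.0, -273.0, 23.1]
print(clean(raw))  # [21.5, 22.0, 23.1]
```

Even a filter this crude shrinks the stream before storage; iterative, ML-driven cleaning as described above refines the rules over time rather than replacing the architecture.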

Additionally, consider developing system-level solutions that automate the distribution of information to a necessary destination. If a sensor alerts a low level of fuel, rather than sending that information to an IoT data scientist, program that information to notify the relevant employee responsible for the machine.
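That kind of system-level routing can be as simple as a rule table. In this sketch, the alert types and addresses are hypothetical; each alert goes straight to the responsible party instead of a data scientist:

```python
# Sketch: rule-based alert routing so operational alerts bypass the data
# science team. Alert types and recipient addresses are illustrative.

ROUTES = {
    "low_fuel": "machine-operator@example.com",
    "overheat": "maintenance@example.com",
}

def route_alert(alert):
    """Return the recipient for an alert, defaulting to an ops queue."""
    return ROUTES.get(alert["type"], "ops-queue@example.com")

print(route_alert({"type": "low_fuel", "machine": "gen-7"}))
# machine-operator@example.com
```

Unmatched alert types fall through to a default queue, so nothing is silently dropped while new rules are added.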

The IoT data scientist

Sensors are capable of gathering more information than ever before and distributing the data immediately. As a result, IoT is becoming the testing field for the world's most finely tuned technological advancements, and companies large and small are reaping the benefits. IoT data scientists are gatekeepers to the future of business. However, a successful operation requires business leaders to firmly grasp the struggles of the emerging field. If the goal is practical insights, start with the knowledge that IoT data scientists need extra support, flexible resources and, more than anything else, room to grow.


August 7, 2019  5:54 PM

Where is the edge in edge computing?

Gordon Haff
Edge computing, edge devices, iot, IoT devices, IoT edge, IoT edge computing

The center of gravity for computing has expanded and contracted over the years.

When individual computers were expensive, users congregated around those scarce resources. Minicomputer departmental servers and especially PCs kicked computing out toward the periphery. Cloud computing on public clouds started to pull compute inward again.

However, a variety of trends — especially IoT — are drawing many functions out to the periphery again.

The edge isn’t just the very edge

The concept of edge computing as we understand it today dates back to the nineties when Akamai started to provide content delivery networks at the network edge. We also see echoes of edge computing in client-server computing and other architectures that distribute computing power between a user’s desk and a server room somewhere. Later, IoT sensors, IoT gateways and pervasive computing more broadly highlighted how not everything could be centralized.

One thing these historical antecedents have in common is that they're specific approaches intended to address rather specific problems. This pattern continues today to some extent: "Centralize where you can and distribute where you must" remains a good rule of thumb when thinking about distributed architectures. Today, however, there are far more patterns and far more complexity.

Why must you sometimes distribute?

Centralized computing has a lot of advantages. Computers sit in a controlled environment, benefit from economies of scale and can be more easily managed. There's a reason the industry has generally moved away from server closets to data centers. But you can't always centralize everything. Consider some of the things you need to think about when you design a computing architecture, such as bandwidth, latency and resiliency.

Start with bandwidth: moving bits around costs money, in networking gear and in transit charges. You might not want to stream movies from a central server to each user individually; this is the type of fan-out problem that Akamai originally solved. Alternatively, you may be collecting a lot of data at the edge that doesn't need to be stored permanently, or that can be aggregated in some manner before sending it home. This is the fan-in problem.
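A common way to tame the fan-in problem is to summarize a window of raw readings at the edge so only a compact record travels upstream. The sketch below is an illustrative example of that idea, not any particular product's API.

```python
# Sketch of edge-side fan-in aggregation: reduce a batch of raw samples
# to a small summary record before transmission upstream.
def aggregate_window(readings):
    """Summarize a window of numeric samples as min/mean/max plus a count."""
    return {
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
        "count": len(readings),
    }

# 1,000 temperature samples collapse to one four-field record.
summary = aggregate_window([20.0 + 0.01 * i for i in range(1000)])
print(summary["count"])  # 1000
```

The trade-off is fidelity for bandwidth: the raw samples are gone once aggregated, so the window size and statistics kept must match what downstream analytics actually need.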

Moving bits also takes time, creating latency. Process control loops or augmented reality applications may not be able to afford the delays associated with communication back to a central server. Even under ideal conditions, such communications are constrained by the speed of light and, in practice, take much longer.
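The speed-of-light floor is easy to estimate. The back-of-the-envelope calculation below assumes signals in optical fiber travel at roughly 200,000 km/s (about two-thirds of c in vacuum); real round trips add queuing, routing and processing delays on top of this bound.

```python
# Back-of-the-envelope lower bound on round-trip latency imposed by
# signal propagation in fiber (~200,000 km/s, about 2/3 of c in vacuum).
def min_rtt_ms(distance_km, signal_speed_km_s=200_000):
    """Best-case round-trip time in milliseconds over one fiber path."""
    return 2 * distance_km / signal_speed_km_s * 1000

# A data center 2,000 km away costs at least ~20 ms round trip --
# before any queuing or processing delay is added.
print(min_rtt_ms(2000))  # 20.0
```

Twenty milliseconds is already marginal for a tight industrial control loop or head-tracked augmented reality, which is precisely why such workloads push compute toward the edge.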

Furthermore, you can’t depend on communication links always being available. Perhaps cell reception is bad. It may be possible to add resiliency to limit how many people or devices a failure affects. Or it may be possible to continue providing service, even if degraded, if there’s a network failure.
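Continuing to provide degraded service during a network failure often comes down to a local fallback. The sketch below illustrates one such pattern under assumed names: serve the last cached value when the uplink to the central service is down.

```python
# Sketch of degraded-mode operation at the edge: try the central
# service, and fall back to a locally cached value if the link is down.
def read_setpoint(fetch_remote, cache):
    """Return (value, mode); mode is 'live' or 'degraded'."""
    try:
        value = fetch_remote()
        cache["setpoint"] = value        # refresh the local cache
        return value, "live"
    except ConnectionError:
        # Uplink unavailable: keep operating on the last known value.
        return cache.get("setpoint"), "degraded"

cache = {"setpoint": 42}

def uplink_down():
    raise ConnectionError("no cell reception")

print(read_setpoint(uplink_down, cache))  # (42, 'degraded')
```

The device stays useful, if stale, until connectivity returns, which for many remote or mobile deployments is the difference between a blip and an outage.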

Edge computing can also involve issues like data sovereignty when you want to control the proliferation of information outside of a defined geographical area for security or regulatory reasons.

Why are we talking so much about edge computing today?

None of this is really new. Certainly there have been echoes of edge computing for as long as we’ve had a mix of large computers living in an air-conditioned room somewhere and smaller ones that weren’t.

Today we’re seeing a particularly stark contrast. The overall trend for the last decade has been to centralize cloud services in relatively concentrated scale-up data centers driven by economies of scale, efficiency gains through resource sharing and the availability of widespread high-bandwidth connectivity to those sites.

Edge computing has emerged as a countertrend that decentralizes cloud services and distributes them to many, small scale-out sites close to end users or distributed devices. This countertrend is fueled by emerging use cases like IoT, augmented reality, virtual reality, robotics, machine learning and telco network functions. These are best optimized by placing service provisioning closer to users for both technical and business reasons. Traditional enterprises are also starting to expand their use of distributed computing to support richer functions in their remote and branch offices, retail locations and manufacturing plants.

There are many edges

There is no single edge, but a continuum of edge tiers with different properties in terms of distance to users, number of sites, size of sites and ownership. The terminology used for these different edge locations varies both across and within industries. For example, the edge for an enterprise might be a retail store, a factory or a train. For an end user, it’s probably something they own or control like a house or a car.

Service providers have several edges. There's the edge at the device itself: some sort of standalone device, perhaps a sensor in an IoT context. There's the edge where they terminate the access link, which can often be viewed as a gateway. And there can be multiple aggregation tiers, which are all edges in their own way and may have significant computing power.

This is all to say that the edge has gotten quite complicated. It’s not just the small devices that the user or the physical world interacts with any longer. It’s not even those plus some gateways. It’s really a broader evolution of distributed computing.

