IoT Agenda


August 23, 2019  11:15 AM

Grab the right IoT tool for the job

Richard Beeson Profile: Richard Beeson
Internet of Things, iot, IoT analytics, IoT data, IoT data management

Do you grab just a Crescent adjustable wrench or two, or do you need to haul around an entire set of combination or open-end wrenches in both metric and imperial sizes?

More topically, in the world of IoT, what is the best tool for the job when it comes to data management: a general-purpose tool or one highly tuned to a specific domain of needs? In IoT, there are streams of transactions, streams of sensor data, and metadata and context describing the device and the environment around those streams. The challenge with this data lies in the streams, which can represent extreme velocity and volume — though typically not variety — in an often communication-constrained or power-constrained environment.

It is a whole lot easier to grab your handy Crescent wrench when you are hiking 10 minutes in the heat of the day to your dock to fix your ladder and cannot remember what size bolts it has. You have an adaptable solution in your pocket. It may not fit in the tight spots and it may cause some extra wear on the bolt, but it gets the job done. Then again, in a use case around water, you might not want to rough up the bolt. You might need the best tool for the job, which in this case means a well-fitting, fixed-size wrench.

In the world of streaming data in IoT, the goal is to maximize the amount of information, not just data, available about the device within the capabilities of the system. To do this, you need to understand the “physics” of the underlying system to know the difference between data and information. Let’s consider the following example.

Data is not always information

Data from a thermocouple. Source: Richard Beeson, OSIsoft

Let’s start with a set of numbers including 98.2, 98.7, 99.2, 98.9 and 98.6. These values were established at t1 through t5 respectively. Which of these numbers represent information?

From the point of view of an operations engineer, if this data came from a thermocouple, you could argue that if the times are equally spaced, then only the first, third and fifth numbers convey information: the second and fourth values fall exactly on the line between their neighbors, so they can be reconstructed by linear interpolation. Those two numbers carry information only in the fact that a measurement occurred, not in their values. If that fact is otherwise captured because there is no indication of measurement failure, then only three measurements are needed to capture all of the available information. There is even more we can do to trim the data without losing information, including applying generic lossless compression after all knowledge of the underlying physics has been exploited.

Information from a thermocouple. Source: Richard Beeson, OSIsoft

The validity of this relies on some important assumptions tied to the physics of the situation. First, the sampling interval between tn and tn+1 should be well below the time constant of the underlying process. Second, it assumes that the thermocouple is both precise at this scale and accurate enough to represent the actual temperature. If both assumptions hold, then any calculation or statistic derived from any value taken at any time on this plot is valid and appropriate for any statistical or operational application.
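
To make this concrete, here is a minimal sketch in Python — my own illustration of the idea, not OSIsoft's actual algorithm — that drops any sample recoverable, within a tolerance, by linearly interpolating between its neighbors:

    # Illustrative sketch only: drop samples that linear interpolation
    # between their original neighbors can recover within a tolerance.
    def compress(samples, tol=0.05):
        # samples: list of (time, value) pairs in time order
        if len(samples) < 3:
            return list(samples)
        kept = [samples[0]]
        for prev, cur, nxt in zip(samples, samples[1:], samples[2:]):
            frac = (cur[0] - prev[0]) / (nxt[0] - prev[0])
            interpolated = prev[1] + frac * (nxt[1] - prev[1])
            if abs(cur[1] - interpolated) > tol:
                kept.append(cur)  # cannot be reconstructed; keep it
        kept.append(samples[-1])
        return kept

    readings = [(1, 98.2), (2, 98.7), (3, 99.2), (4, 98.9), (5, 98.6)]
    print(compress(readings))  # [(1, 98.2), (3, 99.2), (5, 98.6)]

Run against the five thermocouple readings above, this keeps exactly the first, third and fifth samples. Real historians use more refined methods, such as swinging-door compression, but the principle is the same.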

Information from a bank account. Source: Richard Beeson, OSIsoft

Let’s look at another series of data: 98.2, 98.7, 99.2, 98.9, 98.6. These values were established at t1 through t5 respectively. I know what you are probably thinking here — is this a cut-and-paste error? No, this is fundamentally different information, because in this case the values are automated daily deposits to a savings account based on a percentage of your balance. Each piece of data in this series conveys important information, and throwing out any values would be throwing out real money. Furthermore, these are discrete measurements, not samples of a continuous system. The “physics” of a thermocouple and a bank deposit are completely different. To someone without the context, they look exactly the same.

For decades, data management systems referred to as historians have been designed to exploit this knowledge of continuous sensor signals, optimizing the storage and subsequent use of information in operations. Properly tuned, a historian can capture and serve all relevant information for critical operations and IoT with a fraction of the data and resources that might otherwise be needed. This allows for a more efficient, timely and extensive picture of the physical operations. With communication constraints and data volumes in the zettabytes, we must remember that we don’t live in a world of infinite resources. For financial transactions, on the other hand, use a highly redundant transactional database with atomicity, consistency, isolation and durability (ACID) guarantees, where every piece of data is information.


August 22, 2019  4:11 PM

How distributed ledger technology can serve megacities of the future

Christopher Justice Profile: Christopher Justice
distributed ledger technology, DLT, iot, IoT devices, IoT infrastructure, Smart cities, smart city

According to the United Nations’ Department of Economic and Social Affairs, one in eight people in the world currently lives in one of 33 megacities — cities with populations over 10 million — and more than half of the world’s population lives in cities of some size. To improve the quality of life for all inhabitants, cities are having to extend their digital infrastructures, a trend termed smart cities.

The International Data Corporation estimates that smart city spending will reach $158 billion by 2022, and the driver for this is IoT: network connectivity embedded into everyday objects.

An in-depth look at smart cities

Cities are the pre-eminent arena for economic growth globally, and they house an increasing proportion of the world’s population. By 2050, an estimated 68% of the world’s population will live in cities, and megacities will be producing more than 80% of the global gross domestic product.

Urbanization has been the key trend of the 20th and 21st centuries. The evolution toward a more data-centric approach to city life is therefore inevitable, because information and technology give us the capability to do more in our cities.

Today, cities like Dubai enable access to their data warehouses. These smart cities invite corporations, citizens and entrepreneurs to use that data, come up with innovative solutions and make their cities more sustainable and attractive places to live and work.

New ways of gathering and using data while protecting privacy will be required to address the challenges of urbanization, such as resource management of energy, water, land, healthcare, transportation and waste. This data will flow through IoT infrastructures, consisting of identifiable connected devices. These devices are lightweight and can collect and transmit data over networks.

Success in this arena will depend on smart cities being able to capture vast amounts of data, whether it is raw, clean or authenticated. The biggest challenge in achieving such success will be unifying and standardizing these data sets while maintaining their integrity and consistency. Smart cities must also avoid erroneous data capture and promptly deal with device downtime and damage.

Systems that field data

The two systems organizations currently use to field the data collected from smart cities are centralized cloud compute infrastructures and distributed systems. In distributed systems, different computers in different locations work together to process the data, rather than processing data at one centralized location.

IoT devices connected to a distributed ledger technology (DLT) infrastructure can run business logic rather than sending everything back to a centralized cloud computing model. This is imperative because the centralized cloud computing model will not be able to cope with the demand from the increasing number of devices connected to the Internet. A Statista study predicts growth to as many as 75 billion devices in approximately the next five years.

A scalable DLT solution will be needed to support the vast amounts of data propagated throughout city networks. DLT can provide the substrate for a secure, transparent peer-to-peer digital infrastructure for smart cities, where data management will need to support open innovation, collaborative environments and mutually beneficial relationships.
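
To illustrate the tamper evidence at the heart of any ledger — a toy sketch only, not a production DLT implementation — each record of device data can embed the hash of the record before it, so altering any historical entry breaks the chain:

    import hashlib, json

    def add_record(chain, device_id, payload):
        # Each entry embeds the hash of the previous entry, so tampering
        # with any historical record invalidates everything after it.
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        record = {"device": device_id, "data": payload, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)
        return record

    def verify(chain):
        for i, rec in enumerate(chain):
            body = {k: v for k, v in rec.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["hash"] != expected:
                return False
            if i and rec["prev"] != chain[i - 1]["hash"]:
                return False
        return True

    ledger = []
    add_record(ledger, "meter-17", {"kwh": 3.2})
    add_record(ledger, "meter-17", {"kwh": 3.4})
    print(verify(ledger))  # True; edit any entry and this becomes False

Real DLT platforms add consensus, replication and access control on top of this basic chaining, but the chaining is what makes shared device data trustworthy without a central gatekeeper.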

Incentivizing change

With the right kind of DLT, data can be shared much more securely and easily than with centralized data stores, which reflects a shift from the current closed value chain toward an open value ecosystem.

Another advantage of the distributed model is that where the centralized model has a few big entities controlling most of our data, here the data is controlled and the value is created by the data providers themselves — you and me. With properly structured incentive methods and privacy controls, we can allow smart city inhabitants to contribute toward smart city initiatives. Currently, private companies control individuals’ personal data at an increasing rate. As public perceptions change and people become aware of how their data is being used, they are pressing for change.

Such a change would lead to people having greater control of how their data is used, who is using it and for what reason. The way in which data is handled throughout its lifecycle is what makes the difference. As an example, we can look at supply chains and waste management, which are aspects of product life cycles that are integral to the economic function of any city.

A traceable solution

Organizations can authenticate products by adopting a DLT solution that tracks and traces the provenance of a product from its source, registering it with a unique identifier. If we put unique identifiers on plastic bottles, we can incentivize recycling schemes by rewarding people when they return bottles to recycling bins. Schemes like this are already popular in the U.S. and Europe and will soon be mandated in the U.K. For example, Atlas City collaborates with CryptoCycle to implement Reward4Waste, a deposit return scheme for single-use plastic beverage containers that rewards consumers for their environmentally friendly actions while driving higher capture rates and reducing costs.

The concept of a smart city goes beyond the built environment. It relies on its citizens to be technically aware and able to participate. The key will be the ability to change behaviors while also providing better services, lower costs and a better environment. Interoperability across various technologies will be crucial in enabling smart cities to be truly smart, and DLT will likely be one component of this.



August 22, 2019  10:32 AM

How IoT and AI are fueling the autonomous enterprise of the future

Mike Leibovitz Profile: Mike Leibovitz
AI and IoT, Automation, Internet of Things, iot, IoT in retail, IoT verticals

IoT technology has reached critical mass. Today, there are more than 10 billion IoT devices in use around the world. Approximately 127 new devices are connected every second, according to PYMNTS’ monthly Intelligence of Things Tracker report. That’s more than 2,000 new connections since you started reading this article.

While much of the IoT conversation focuses on the devices themselves, the true potential of IoT extends well beyond hardware. Instead, it’s in the data a device generates, the action it instigates and the ultimate value it delivers. For example, sensors deployed in a grocery store are important because they send real-time data about stock levels to store employees so they can manage inventory accordingly.

As the volume and sophistication of connected technology increases, IT leaders must ensure devices, architecture, automation and human intelligence are working in harmony to create superior end-user experiences. This is a framework known as the autonomous enterprise. Let’s explore how to combine IoT technology with AI and automation to create the autonomous stores, classrooms and smart cities of the future.

Modernizing retail stores

From buy-online-pickup-in-store options to mobile point-of-sale systems, new IoT technology makes the check-out process and in-store experience more seamless and convenient for customers.

Increasingly, retailers also experiment with video analytics and sensors to automate inventory management and track product sell-through rates. For instance, grocers use IoT devices, such as temperature sensors, to preserve cold and frozen goods, ensure food safety and minimize spoilage.

Store managers can also deploy shelf sensors to analyze which products a shopper removes from a shelf and then replaces without purchasing. This data is valuable for store owners and brands because it provides insight into consumer shopping behaviors and helps answer questions about why a product has a strong or poor sell-through rate. Did consumers engage with your product, such as by picking it up, or were they more attracted to a different, neighboring product in the store?

Furthermore, automated real-time notifications about inventory levels can ensure shelves are adequately stocked for shoppers and save employees time on manual inventory monitoring tasks so that they can be more readily available for customers.
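
A hypothetical sketch of the kind of rule behind such a notification — the SKUs and thresholds here are invented for illustration:

    # Hypothetical threshold rule for real-time restock alerts.
    SHELF_MINIMUMS = {"milk-2pct": 12, "bread-wheat": 8}  # invented values

    def check_shelf(shelf_readings, notify):
        # shelf_readings: current on-shelf count per SKU from shelf sensors
        for sku, count in shelf_readings.items():
            minimum = SHELF_MINIMUMS.get(sku)
            if minimum is not None and count < minimum:
                notify(f"Restock {sku}: {count} left (min {minimum})")

    check_shelf({"milk-2pct": 5, "bread-wheat": 9}, print)
    # Restock milk-2pct: 5 left (min 12)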

The potential for IoT also extends to the backend of retail operations, and helps optimize companies’ supply chains. Kroger is among the leading innovators in the grocery retail category, deploying IoT and automation technology such as robotic carts and robust warehouse management systems in its distribution and fulfillment centers. These technologies help employees expedite picking and packing processes. The combination of technology and human intelligence increases the efficiency of retail operations and helps deliver exceptional customer experiences.

Powering digital classrooms

Previously, I highlighted how classroom environments are becoming increasingly digital. Smartboards, robotics and video live-streaming technology are just a few examples of the kinds of IoT technology used in schools today. According to a Deloitte survey, 80% of teachers use digital education tools at least once a week.

However, the influx of devices presents new challenges for IT teams and school networks. A robotics lab supporting a STEM lesson plan or a video live-stream broadcasting to remote students are both bandwidth-intensive. When Wi-Fi traffic is not properly managed during these activities, students and teachers will likely face connectivity challenges, and the rest of the campus can suffer significant network latency. In turn, network downtime in schools impedes educators’ lesson plans and disrupts digital-dependent learning such as online testing or smartboard lectures.

School administrators can supplement their Wi-Fi network with AI functionality to improve radio frequency efficiency, and expand wireless capacity to meet new bandwidth demands across the campus automatically and on-demand. Specifically, AI can seamlessly optimize the wireless network in changing environments such as the cafeteria or gymnasium where RF characteristics can vary significantly depending on the number of people and devices present. With strategic automation, students, faculty and staff can rely on consistent connectivity.

In addition to cultivating immersive learning environments for students, the combination of automation and IoT devices can play a critical role in supporting day-to-day operations and reinforcing school safety. For example, Forsyth County School District recently announced plans to deploy 600 cameras with advanced analytics and facial recognition capabilities to enhance attendance tracking and increase campus security.

Automating the administrative task of manually taking attendance enables teachers and students to maximize time spent on learning. Meanwhile, automated facial recognition and video analytics can help to identify potential intruders and unauthorized visitors that may threaten school safety.

Fortifying smart cities

By 2050, 70% of the world’s population will live in urban areas, according to the Organization for Economic Cooperation and Development. As urban density increases, city planners and CIOs are looking to smart, connected technology such as digital signage, traffic cameras, stoplight timers and roadway sensors to help alleviate congestion, prevent traffic accidents, and improve overall living conditions for citizens.

This IoT technology is already driving meaningful results. In 2018, McKinsey Global Institute found that various smart city applications could reduce fatalities by 8% to 10%, reduce the average commute time by 15% to 20% and cut greenhouse gas emissions by 10% to 15%.

While smart city innovation helps the urban population in many ways, it also introduces new challenges for city CIOs — namely, managing hundreds of thousands of disparate systems and IoT devices while preventing bad actors from infiltrating critical functions such as traffic control and water and energy management.

To enhance security and to ensure citizens don’t lose access to these critical public services, municipalities can leverage real-time network analytics so they can have a clear understanding of device usage patterns. In turn, they can use AI and machine learning technology to monitor network traffic by application and device type, detect anomalies and quickly mitigate potential security incidents.
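
As a hedged illustration of monitoring traffic by device type, a simple allow-list check can flag devices talking to destinations outside their class's expected set — the device classes and endpoints below are invented:

    # Illustrative allow-list check: flag devices communicating with
    # endpoints outside their class's expected destinations.
    EXPECTED = {
        "traffic-camera": {"vms.city.internal"},
        "water-sensor": {"scada.city.internal"},
    }

    def flag_anomalies(flows):
        # flows: list of (device_class, destination) observations
        return [(cls, dst) for cls, dst in flows
                if dst not in EXPECTED.get(cls, set())]

    print(flag_anomalies([("traffic-camera", "vms.city.internal"),
                          ("traffic-camera", "203.0.113.9")]))
    # [('traffic-camera', '203.0.113.9')]

Production systems learn these baselines with machine learning rather than hand-written tables, but the response — detect the deviation, then mitigate — is the same.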

Fueling the autonomous enterprise

As IoT technology becomes more entrenched in our everyday lives, industry-leading organizations understand that the devices are not the end game. Rather, when IoT technology, architecture, automation and human intelligence work together in harmony, IT leaders can drive operational efficiency, reduce time spent on mundane, administrative tasks and fortify network security to deliver enhanced end-user experiences. This is the autonomous enterprise vision that we’ll continue to see come to life in stores, schools and cities.



August 22, 2019  9:44 AM

The benefits and pitfalls of engaging in IoT research

Mitch Maimam Profile: Mitch Maimam
Internet of Things, iot, IoT deployment, IoT design, IoT development, research and development

While there are already many connected products in the marketplace, there remain many opportunities for developing new IoT products and technologies. The greatest challenge for companies looking to develop an IoT product is how to apply an R&D process that will result in viable commercial and consumer-facing IoT products.

Today, most implementations of full stack IoT solutions involve the integration of existing technologies that are well-known. Some of these technologies have been heavily commercialized for years, and are at a cost point attractive enough for commercial and consumer applications. Other technologies are developed and available, but have not dropped down the cost/performance curve to the point where they are suitable for consumer-facing IoT solutions. With these gaps in the available technology, research into these new domains of IoT is warranted and has the potential to uncover new viable IoT solutions.

First, let’s take a closer look at the two halves of R&D.

Research

By definition, research projects are much more fundamental than development projects. The outcome might prove a product idea to be either achievable or unachievable. Cost risks are higher because the path to realization can be unpredictable or lack a known goal. Schedules for project completion are also usually fuzzy. Hence, research projects are high risk.

Development

Development projects are those where achieving the objective is possible using known technologies or simple technology extensions. This is not to minimize the cost of execution, which can be substantial. However, the risks to satisfying the project’s goals, costs and schedules are manageable.

Though creating a high-quality, reliable IoT solution is a significant investment and takes time to realize, such efforts are typically development projects. When significantly different technology is required to realize the product, the project generally falls into the research category.

IoT research pitfalls

IoT research projects have many risks, and our experience has shown that they can fail for a variety of reasons. I have outlined a few below:

  • The physics of achieving the technological goal might make it impossible.
  • Achieving the technology goal might be possible, but at an ingredient cost that precludes a commercially acceptable product price point.
  • The time to achieve an outcome can be highly unpredictable. Not only might it take longer to develop a technology, but the clock might not be in the favor of the team investing in it. A competing technology from another source might arise. If the time window slips, the business opportunity might vanish or be significantly reduced.
  • The technology might exist, but its industrial capacity might be fully absorbed by a few major companies. This means the technology is available, but perhaps not to anyone other than the largest players willing to commit to huge volumes of component parts. As an example, do you want the newest and latest display used by Apple in the newest — or soon to be released — iPhone? Great. Realize Apple might have absorbed the capacity of the supply chain for those displays. Expect shortages of supply, which translate into both cost and unpredictable delivery schedules. It happens.

When considering investment in IoT research, it’s important to narrow the focus of the research area. It is certainly possible to do fundamental research at the academic level, but the more relevant and important research is that which defines products that — if ultimately successful — will fulfill a current or anticipated marketplace need.

IoT research elements for success

For a successful IoT-oriented research program, organizations must ensure that certain elements are in place. I have outlined a few below:

  • Inclusion of discovery around technology that lends itself to commercial scale manufacture, and at an attractive price point.
  • Organization of research projects with frequent and clearly defined checkpoints where the goals are reviewed, progress is assessed and the go-forward cost and schedule are assessed along with spending to date.
  • Frequent reviews of the known and anticipated competitive landscape, to avoid “a day late and a dollar short” syndrome.
  • Willingness to accept risk and to be clear sighted and open to the possibility of failure. While teams need to be tenacious and motivated for success, one has to understand that — at some point — it might be necessary to admit defeat.
  • The value proposition for success must be sufficient to warrant the significant cost, unpredictable schedule and risk of outright failure. If this cannot be clearly assessed, it might not make sense to make the investment.

Nowhere more than in the domain of research does an IoT-oriented company need to keep its eyes wide open. It must not only accept risk, but also support and celebrate failures for the knowledge they provide to inform future efforts. However, those who manage research successfully are able to overtake the competition and gain a sustainable edge by being first to market.



August 21, 2019  3:48 PM

Mapping the device flow genome

Greg Murphy Profile: Greg Murphy
Internet of Things, iot, IoT data, IoT data management, IoT device management, IoT devices, iot security

The explosion of connected devices has given rise to today’s hyper-connected enterprise, in which everyone and everything that is fundamental to the operation of an organization is connected to a network. The number of connected devices runs into the billions and is growing exponentially in both quantity and heterogeneity. This includes everything from simple IoT devices, such as IP cameras and facilities access scanners, to multi-million-dollar functional systems like CT scanners and manufacturing control systems. With the sudden surge of disparate and complex devices all tapping into various enterprise networks, it is little surprise that hyperconnectivity is becoming an incredibly complex and increasingly untenable problem for IT and security groups to address. This is especially true for device-dependent Global 2000 organizations, major healthcare systems, retail and hospitality operations or large industrial enterprises.

A complex problem like hyperconnectivity cannot be solved without first establishing a baseline of understanding. For example, in the medical community, development of targeted therapy for many serious diseases was comparatively ineffective before the mapping and sequencing of more than three billion nucleotides in the human genome. The Human Genome Project, a 15-year collaborative effort to establish this map of human DNA, has enabled the advancement of molecular medicine at a scale that was once impossible.

Similarly, IT, security and business leaders cannot address the myriad challenges of the hyper-connected enterprise without fully mapping the device flow genome of each network-connected device and system. Much like DNA mapping, mapping the device flow genome is a significant challenge, but well-worth the effort for the intelligence it provides.

The challenge of mapping a system is enormous, because it requires complete understanding of both the fixed characteristics of each device and the constantly changing context in which it operates. To do this at scale, network operators must be able to apply sophisticated machine learning to accurately classify each device and baseline its dynamic behavior along with the context of the network.

If operators can do that, they can immediately identify potential mutations in the genome — devices that are not behaving the way they should — and mount an appropriate response to ensure business continuity and prevent catastrophic downstream consequences. At the same time, they can leverage artificial intelligence to define and implement actionable policies that prevent future recurrences. That is the only reliable way to protect critical assets and deliver true closed-loop security in the hyper-connected enterprise.

Mapping vs. fingerprinting

Traditionally, solutions seeking to identify and classify devices on a network have used static device fingerprinting, which can discover a device’s IP address, use MAC address lookup to identify the device manufacturer and apply other rudimentary techniques to build a generic profile of the device. Fingerprinting answers some important but very basic questions: How many devices are connected to the network? To which ports and VLANs are they connected? How many of these devices are from manufacturer X?
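
For illustration, the manufacturer-lookup part of fingerprinting can be as simple as matching the first three octets of a MAC address — the organizationally unique identifier (OUI) — against a vendor table. This toy sketch uses a two-entry table; a real system would consult the full IEEE registry:

    # Toy static fingerprint: map a MAC's OUI prefix to a vendor.
    OUI_TABLE = {"00:1b:63": "Apple", "00:0c:29": "VMware"}  # tiny sample

    def vendor_of(mac):
        # The OUI is the first three octets (eight characters with colons).
        return OUI_TABLE.get(mac.lower()[:8], "unknown vendor")

    print(vendor_of("00:1B:63:84:45:E6"))  # Apple

Note how little this tells you: the vendor, but nothing about what the device is doing or should be doing.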

Gathering more specific information has typically required agents to be installed on each endpoint. In the hyper-connected enterprise, that is simply not possible because the scale and heterogeneity of these devices quickly break traditional IT and security models. Instead, by fully mapping the device flow genome automatically — without any modifications to the device or the existing enterprise infrastructure — an operator gains the identifying details that lead to actionable insight.

As an example, a fingerprinting solution might — at its best — enable a hospital to identify the number of heart rate monitors connected to its network. Mapping the device flow genome would not only identify those heart rate monitors, but also reveal that six of them are subject to an FDA recall, two are running an outdated OS that makes them incredibly vulnerable to ransomware, and three are communicating with an external server in the Philippines — all major red flags.

This level of granularity is necessary and attainable for every device: IP cameras, HVAC control systems, access badge scanners, self-service kiosks, digital signage, infusion pumps, CT scanners, manufacturing control systems, barcode scanners and more — even the devices that find their way into an environment without operator knowledge, such as the Amazon Echo and Apple iPad. The quantity and variety of these devices is almost unimaginable in the enterprise today, and it is going to grow by orders of magnitude in the near future.

Identify and take control

Once this valuable data has been garnered from mapping the device flow genome, operators will have a sophisticated level of detail on what’s connected to their networks and on what each device is doing and should be doing. That information, analyzed and applied appropriately, should enable hyper-connected enterprises to take control of their vast array of devices to ensure effective protection today and over time.

AI-based systems will enable enterprises to deploy powerful policy automation to regulate the behavior of every class of device so none are able to communicate in any manner — either inside or outside of the network — that exposes them to risk and vulnerability. From there, enterprises can fully secure each class of device by implementing micro-segmentation and threat remediation policies with sophisticated and actionable AI.



August 21, 2019  3:24 PM

IoT brings the physical and digital worlds together

Dipesh Patel Profile: Dipesh Patel
Enterprise IoT, global IoT, IoT analytics, IoT data, IoT data management, IoT in retail, iot security, smart buildings

It’s becoming increasingly clear to me that IoT is the start of something entirely new, rather than an end state in itself. The real prize is in how IoT — and a global host of connected devices — will add new context to data already being gathered through existing digital, analogue and manual means. However, taking advantage of this opportunity has not been easy. According to McKinsey, as much as 70% of IoT projects remain stuck in the proof of concept phase, rather than moving to deployment. IoT adopters need to be able to show real business value as well as how IoT solves a particular problem, and it all comes down to obtaining a complete picture of the data.

To make this happen, we need to bring the physical and digital realms into close harmony. We also need to ensure that there is clear transparency and consent when obtaining customer data. As we analyze data, we must ask ourselves whether it comes from IoT devices or from digital engagement. Privacy and security must be treated as first-class citizens, not as an afterthought, for IoT to thrive. All of this is a complex technology task, but one that is surmountable.

Transforming business outcomes through IoT data

A real-world example of the data-driven opportunity is in the retail space, where the combination of IoT-enabled physical stores and shoppers’ online buying preferences is opening new possibilities. These IoT-enabled stores are becoming more prevalent as retailers look to drive omnichannel personalized experiences, seamless checkout and tailored offerings for their shoppers. Value comes from the ability to combine previously siloed, in-store real-time IoT data with a shopper’s digital engagement, such as the store’s mobile apps and loyalty programs, to provide a more holistic experience. That experience includes delivering coupons tailored to shoppers’ buying histories, providing personalized recommendations from an in-store associate and optimizing inventory and product availability.
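
As a hedged sketch of what combining those silos can look like — every name and rule below is invented for illustration — in-store sensor events can be joined with a digital loyalty profile to select an offer:

    # Invented example: join a loyalty profile with in-store sensor
    # events to pick a tailored offer.
    profile = {"shopper_id": "s-123", "favorite_category": "coffee"}
    store_events = [{"shopper_id": "s-123", "aisle": "coffee", "dwell_s": 95}]

    def pick_offer(profile, events):
        for e in events:
            if (e["shopper_id"] == profile["shopper_id"]
                    and e["aisle"] == profile["favorite_category"]
                    and e["dwell_s"] > 60):  # lingered in a favorite aisle
                return f"Coupon: 10% off {e['aisle']}"
        return None

    print(pick_offer(profile, store_events))  # Coupon: 10% off coffee

Neither data set alone supports this decision; the value comes from the join.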

At the end of the day, it is about interacting with the shopper through their preferred channels, giving them unique experiences and providing tailored offers that drive loyalty, which ultimately leads to repeat business. According to an Arm Treasure Data study, nearly 50% of shoppers would consent to companies using their data if it meant getting the right rewards and incentives.

This is just one example of a real-world scenario in retail. Another example is a building owner bringing together data from HVAC, security, lighting and IoT devices to obtain a unified view of building operations to drive cost savings and enhance customer experiences. The value is replicable across industries as data silos are removed and separate data sets are brought together.

Making data security and privacy a priority

Bringing the physical and digital worlds together paints a far richer data picture. It also means there needs to be an added emphasis on security and privacy whenever data is involved, whether that is a retailer delivering more personalized customer experiences or a property manager using IoT technologies to better understand use of their commercial building space.

Security is vital as adversaries continue to get more advanced in their attack methods, and the cost of cybercrime for organizations continues to grow. Data security starts from the ground up, with IoT devices built and tested on secure frameworks. Organizations should look for IoT data management solutions that support secure management of the physical IoT device and its data throughout the lifecycle, and securely unify a broad set of enterprise digital data with IoT data.

Privacy concerns about how data is collected, used and stored make transparency and consent critical, and they must be addressed. For example, in the retail scenario described above, consent can be obtained via an opt-in through a store’s mobile app or loyalty program. The data management solution should provide tools and features to enact and manage leading privacy capabilities within applications that collect, store and utilize data.

Unlocking new possibilities with IoT data

The combination of physical IoT and digital information presents a wealth of opportunity for organizations across industries to transform their businesses. Organizations should look at IoT solutions that enable them to securely unify, store and analyze all of this data to deliver actionable insights. Ultimately, the true value of IoT will be achieved if data can be harnessed to solve real business challenges at scale, while also keeping security and privacy at the forefront.



August 21, 2019  2:51 PM

IoT edge devices need benchmarking standards

Jason Shepherd Profile: Jason Shepherd
Edge computing, Internet of Things, iot, IoT devices, IoT edge, IoT edge computing, IoT standards

This is the third part of a four-part series. Start with the first post here.

In the first two installments of this four-part series I discussed what edge computing is and why it matters. I also outlined some key architectural and design considerations based on the increasing complexity of hardware and software as you approach the device edge from the cloud. In this installment, I’ll dig even deeper.

Infrastructure size matters

Cameras are one of the best sensors around, and computer vision — applying AI to image-based streaming data — is the first killer app for edge computing. Only someone who wants to sell you wide area connectivity thinks it’s a good idea to blindly send high-resolution video over the Internet. A smarter practice is to store video in place at your various edges and review or backhaul only when meaningful events occur.
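
In rough pseudocode terms, the store-locally, backhaul-on-events pattern looks something like this sketch, where detect_event and upload_clip stand in for a real computer vision model and transport:

    from collections import deque

    # Sketch: keep recent footage in local edge storage; upload only
    # short clips around detected events.
    buffer = deque(maxlen=30 * 60 * 5)  # ~5 minutes of frames at 30 fps

    def process_frame(frame, detect_event, upload_clip):
        buffer.append(frame)                  # everything stays at the edge
        if detect_event(frame):               # e.g., motion in a restricted zone
            upload_clip(list(buffer)[-300:])  # backhaul only ~10 s of context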

For any given use of edge devices, a key value a provider can give customers is pre-validation and performance benchmarks for workloads. This ensures customers purchase the right-sized infrastructure up front and get reliable performance in the field. Surveillance for safety and security is fairly constrained in terms of variables, such as the number of cameras, resolution, frame rate and footage retention time. The marketplace for camera makers and video management providers is well-established. These constrained variables make benchmarking appropriate infrastructure sizes relatively straightforward.
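
Because the variables are constrained, a first-order sizing estimate is simple arithmetic. The camera count and bitrate below are illustrative assumptions, not vendor benchmarks:

    # Back-of-envelope storage sizing for surveillance retention.
    def retention_storage_tb(cameras, mbps_per_camera, retention_days):
        seconds = retention_days * 24 * 3600
        total_bits = cameras * mbps_per_camera * 1e6 * seconds
        return total_bits / 8 / 1e12  # terabytes

    # 32 cameras at 4 Mbps each, kept for 30 days:
    print(round(retention_storage_tb(32, 4, 30), 1))  # ~41.5 TB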

As new tools and the rise of edge computing enable scale for the use of computer vision, applications become less constrained than surveillance, and data regarding behavior and performance needs is harder to come by. For example, in a brick and mortar retail scenario, the compute power needed to identify basic customer demographics — such as gender or age range — with an AI model is different than what’s needed to assess individual identity. Retailers often don’t have power or cooling available in their equipment closets, so they must get creative. It would be valuable for them to know in advance what the loading requirements will be.

Users’ needs are likely to grow over time with the consolidation of more workloads on the same infrastructure. It’s important to deploy additional compute capacity in the field and invest in the right modular, software-defined approach up front, so you can readily redistribute workloads anywhere as your needs evolve.

Fragmented edge technology makes benchmarking trickier

In more traditional telemetry- and event-based IoT, measuring efficacy and developing benchmarks is especially tough due to the inherent fragmentation near the device edge. Basically, every deployment tends to be a special case. With so many different uses and tool sets, there are no established benchmark baselines.

I often draw edge-to-cloud architecture outlines left to right on a page because it fits better on slides with a landscape layout. A few years back, while I was presenting many of these concepts, someone pointed out that the cloud on the right is like the East, with the longest legacy of refinement and stability, whereas the edge on the left is the Wild West. This pretty much nails it, and it is why it’s so important for us to collaborate on open tools like EdgeX Foundry that facilitate interoperability and order in an inherently fragmented edge solution stack. It takes a village to deploy an IoT solution that spans technology and domain expertise.

In addition to facilitating open interoperability, tools like the EdgeX Foundry framework provide bare-minimum plumbing that not only serves as a center of gravity for assembling predictable solutions, but also facilitates stronger performance benchmarks regardless of use.

Tools should fit a standard for IoT edge interoperability, so IT pros can focus on value instead of reinvention. An IoT standard would also create benchmarking for infrastructure sizes, so customers can better anticipate their needs over time.



August 19, 2019  5:14 PM

Protect IoT bare dies and wire bonds for high reliability

Zulki Khan Profile: Zulki Khan
Internet of Things, iot, IoT devices, IoT hardware, IoT PCB, PCB design, Printed circuit boards

As IoT devices gain greater acceptance within mission critical industries such as industrial, medical and aerospace, product integrity and reliability are of the highest order. Bare chip or die protection is at the top of the list for ensuring IoT reliability and product integrity.

IoT devices — in most instances — require a combination of conventional surface-mount printed circuit board (PCB) manufacturing and microelectronics manufacturing, a combination known as hybrid manufacturing. IoT PCB microelectronics manufacturing most often requires dies to be placed on a PCB, which can be rigid, flex or a combination of the two known as rigid-flex. In some cases, dies can also be placed on a substrate.

Why protecting die and wire bonding is important

Protecting a bare die and its associated wire bonds is critical to assure mechanical sturdiness and avoid moisture intrusion, thus maintaining a high degree of reliability for the IoT user. PCB microelectronics assembly requires a very delicate, fine wire that is generally made of gold. Typical wire gauges are one, two, three and five mils; five-mil wire is typically used for very high current applications. More often than not, one-mil wire is used, but in some cases sub-mil wire — 7/10 of a mil — is used as well.

It’s highly advantageous for IoT device OEMs to get a good handle on how best to protect bare dies and their associated wire bonding. That way, OEMs can assure themselves of high levels of product reliability.

Methods to protect die and wire bonding

There are two distinct sealing compound methodologies for protecting the die and wire bonding. One is called by the unusual name of glob top, which actually fits very well since a glob of epoxy is placed on top of the die to protect it.

Dam and fill is a similar die sealer, which is in this same glob top category. It involves creating a dam or wall around the die and associated wire bonding by using a high viscosity material. Then, the middle or cavity surrounded by the dam is filled with a low viscosity epoxy. Thus, the high and low viscosity materials act as an effective protector of the die and wire bonding.

Lid and cover is the second encapsulation method. The lid can be ceramic, plastic or glass, depending on customer specifications and the application. Such a lid can be soldered onto the substrate if the surface finish is aluminum, nickel, gold or hot air solder leveling (HASL).

In some cases, a specialized lid with B-staged epoxy is provided. Most likely, it is custom made with the epoxy already applied to the lid or cover. All that is needed in this case is to position the lid over the die and wire bonds and cure the epoxy. While the lid and cover protection method isn’t as widespread as glob top, it is used for certain specialized PCB applications.

Reliability depends not only on choosing the right bare die sealing compound for the right IoT PCB application, but also on the level of microelectronics manufacturing experience. PCB microelectronics manufacturing personnel must have a good understanding of these protection methodologies and how to apply them accurately to form a perfect microelectronics assembly.



August 19, 2019  4:12 PM

Lightweight Machine-to-Machine technology emerges in IoT networks

William Yan Profile: William Yan
Cat-M, Internet of Things, iot, IOT Network, IoT protocols, IoT wireless, M2M, Narrowband IoT

Last year’s report from Gartner Research cited that connected things in use would hit 14.2 billion in 2019, with exponential growth in the years thereafter. IoT is garnering lots of attention, and a lot of organizations are considering and designing IoT services and technologies. One of the key IoT-focused emerging technologies is the Lightweight Machine-to-Machine (LwM2M) protocol, a device communication and management protocol specifically designed for IoT services.

What is LwM2M?

The standard is published and maintained by the Open Mobile Alliance (OMA). Version 1.0 was released in February 2017, and the protocol was initially designed for constrained devices with radio uplinks. As it stands now, LwM2M is a rather mature protocol whose development spans more than five years. In that time, it has gone through four versions of the specification and has been tested in eight test fests organized by OMA. Compared to other IoT device management specifications, one can say the protocol is starting to gain wide market recognition.

Lightweight M2M components

The standard Lightweight M2M components and technology stack. Source: AVSystem.

Lightweight M2M is often compared to Message Queuing Telemetry Transport (MQTT), arguably the most popular device communication protocol in IoT services. MQTT is maintained by the International Organization for Standardization and is a publish-subscribe messaging protocol. As such, it requires a message broker for data communication.

LwM2M, by contrast, comes with a well-defined data model representing specific service profiles, such as connectivity monitoring, temperature reading and firmware updates. Thanks to this well-defined data model structure, the standard enables common, generic, vendor-neutral and implementation-agnostic features, such as secure device bootstrapping, client registration, object and resource access, and device reporting. These mechanisms greatly reduce technology fragmentation and decrease potential interoperability errors.
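
To make the data model concrete, LwM2M addresses everything by an /object/instance/resource path; in the OMA registry, Object 3 is the Device object and its resource 0 is the manufacturer, so reading /3/0/0 returns the manufacturer string. A minimal sketch of the addressing scheme, with invented values:

    # Sketch of LwM2M-style object/instance/resource addressing.
    # Object 3 (Device) and its resource IDs come from the OMA registry;
    # the values stored here are invented.
    device_tree = {
        3: {            # Object 3: Device
            0: {        # instance 0
                0: "Acme Corp",     # resource 0: Manufacturer
                1: "TempSensor-X",  # resource 1: Model Number
            }
        }
    }

    def read(path):
        obj, inst, res = (int(p) for p in path.strip("/").split("/"))
        return device_tree[obj][inst][res]

    print(read("/3/0/0"))  # Acme Corp

Because every compliant client exposes the same objects and resource IDs, a management server can query any vendor's device the same way.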

What are the major advantages of LwM2M?

LwM2M is gaining recognition and starting to be adopted for facilitating IoT deployments due to its specific benefits. These include the following:

  • Ultra-low link utilization, guaranteed by the lightweight protocol design.
  • Operation over links with small data frames and high latency, as applicable to most IoT use cases.
  • Greater power efficiency through Datagram Transport Layer Security (DTLS) session resumption and Queue Mode, which reduce energy usage and make the protocol suitable for devices in power saving mode and extended Discontinuous Reception modes.
  • Support for both IP and non-IP data delivery transports, which minimizes energy consumption.
  • Optimized performance in cellular-based IoT networks such as Narrowband-IoT and Long Term Evolution Cat-M.
  • Support for low-power wide area network bindings.

LwM2M also meets the needs of enterprises that have to balance multiple factors — such as battery life, data rate, bandwidth, latency and costs — impacting their IoT services.

Who can benefit from the LwM2M protocol?

Lightweight M2M is becoming important for enterprises and service providers alike because of its successful use in IP and non-IP transports. It provides device management and service enablement capabilities for managing the entire lifecycle of an IoT device. The protocol also introduces more efficient data formats, optimized message exchanges and support for application layer security based on the Internet Engineering Task Force (IETF) Object Security for Constrained RESTful Environments (OSCORE) standard.

What does the future hold?

As a technology, Lightweight M2M is continually evolving, and an active OMA group is constantly working on advancing it. The next expected release is version 1.2, which will add support in a number of areas, such as MQTT and HTTP transports; IETF-specified end-to-end secured firmware updates; a dedicated gateway enabling group communication; and optimizations such as registration templates and DTLS/TLS 1.3 support.



August 14, 2019  2:34 PM

New-age intelligence systems for oil and gas operations

Abhishek Tandon Profile: Abhishek Tandon
Internet of Things, iot, IoT analytics, IoT data, IoT sensors, Machine learning

The oil and gas industry has been going through a tumultuous time of late. With volatile crude oil prices and geopolitical trends putting pressure on supply, it is becoming imperative for oil and gas companies to manage costs through operational effectiveness and minimize any production hurdles due to unplanned downtimes and unforeseen breakdowns.

Before making production decisions, organizations must understand the complex beast that is upstream operations, with data points to analyze including seismic and geological data to understand the ground conditions; oil quality data to determine gas-oil ratio, water cut and submergibility; and pump calibration data to ensure the pump is optimized for the given conditions. Too much pressure on the pump and it is likely to break; too little pressure and it is being underutilized.

Technology is likely to be a top disruptor in the future of oil and gas operations for this very reason. IoT sensor data analytics and machine learning will enhance the human-machine interface and improve the effectiveness of brownfield setups. But what really makes up a true intelligence system that is likely to disrupt this highly complex industry?

The new avatar of data analysis

There has never been a dearth of data usage in oil and gas operations. Even before data science became cool, a tremendous amount of statistical research was being used to understand seismic and geological data and manage oil field operations efficiently. Data has always been the backbone of decision-making in the oil and gas sector.

With the advent of data technologies that can handle scaling and machine learning to help operations teams and scientists make sense of the data, new-age intelligence systems are also starting to become top priorities in the long list of digital transformation initiatives.

Extracting the unknown unknowns

There are a number of prebuilt models that are used to determine the oil quality and calibrate well equipment. By feeding information into these models, field engineers have a good idea of the way the well is operating.

Machine learning starts to surface the unknown unknowns. It makes the existing setup more sophisticated by analyzing multivariate patterns and anomalies that can be attributed to past failures. Moreover, the analysis patterns are derived from several years of data to reduce any inherent bias. Machine learning alone cannot answer every analysis question; it is one piece of the puzzle and enhances existing knowledge acquired through years of research.

Constituents of a new-age intelligence system

The speed at which organizations receive data and conduct analysis is of the utmost importance. Hence, a sophisticated decision system needs to deliver insights quickly and with tremendous accuracy. A disruption in an oil well can cause a revenue loss as high as $1 million per day.

A true decision support system should have IoT infrastructure, real-time monitoring systems, supervised learning models and unsupervised learning models. IoT infrastructure includes low power sensors, gateways and communication setups to ensure that all aspects of well operations are connected and providing information in near real time. Real-time monitoring systems allow constant monitoring of the assets driven by key performance indicators and look for any issues or spikes that can be caught by the naked eye. Typical scenarios that real-time monitoring systems would cover include existing oil production, temperature and pressure of the well pumps and seismic activity around the well site.

Supervised learning models predict for known patterns and issues. These rely on past information of failures and models that have been honed over time in experimental and production setups. Organizations can use models for predictive maintenance of the pumps and pump optimization for higher productivity. Unsupervised learning models look for anomalies and possible signs of degradation. They utilize complex multivariate pattern data to determine correlations and possible deviations from normal behavior. Unsupervised models determine multivariate correlations between productivity and operational parameters using neural networks and identify early signs of pump degradation using time series analysis and anomaly detection to reduce the probability of a pump breakdown.
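
As a deliberately simple illustration of the unsupervised side — far cruder than the neural-network and time-series methods described above — a rolling z-score can flag a pressure reading that deviates sharply from recent behavior:

    # Minimal anomaly sketch: flag readings more than `threshold` standard
    # deviations from the trailing window's mean. Real systems use richer
    # multivariate models; this only illustrates the idea.
    import statistics

    def flag_anomalies(readings, window=20, threshold=3.0):
        flagged = []
        for i in range(window, len(readings)):
            recent = readings[i - window:i]
            mean = statistics.mean(recent)
            stdev = statistics.pstdev(recent)
            if stdev and abs(readings[i] - mean) > threshold * stdev:
                flagged.append(i)
        return flagged

    # Steady pump pressure with normal jitter, then a sudden spike:
    pressures = [101.0, 100.8, 101.2, 100.9, 101.1] * 6 + [140.0] + [101.0] * 10
    print(flag_anomalies(pressures))  # [30] — the spike is flagged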

Components of an intelligence system. Source: Abhishek Tandon

It is difficult to rely on one type of system alone. Constant improvement requires a combination of human intelligence and machine intelligence. Given the wealth of prior knowledge available about running oil wells effectively, machine learning and big data technologies provide the right arsenal for these systems to become even more sophisticated. A new-age intelligence system is thus a combination of known knowledge captured in existing models and unknown patterns derived from machine learning.


