IoT Agenda

May 17, 2019  11:21 AM

Mastering IoT compliance in a GDPR world

Reggie Best Profile: Reggie Best
Compliance, GDPR, Internet of Things, iot, IoT compliance, iot security, Network monitoring, network visibility, Policy Management, securing IoT

Believe it or not, the one-year anniversary of the date the General Data Protection Regulation went into effect is almost upon us. The May 25, 2018 date will live in infamy for many organizations — particularly those that scrambled to get their people, processes and technologies in order ahead of the GDPR deadline.

But now that we’re a year in, and the initial chaos has died down, it’s the perfect time to reflect upon how GDPR has impacted organizations over the past 12 months — and not just from a data privacy perspective, but from an overall risk standpoint. With this in mind, I’d like to discuss how the regulation has prompted organizations to take IoT security more seriously.

IoT visibility is central to GDPR compliance

IoT has exploded the attack surface, making complete visibility into all connected endpoints across all computing environments a major challenge for many IT security teams. In my first IoT Agenda post, I talked about how, for many of our enterprise customers, it’s not uncommon for us to profile a third or more of IP-enabled endpoints as IoT-type devices — but many IT security teams don’t even know these devices are on their networks.

If you don’t know an IoT device is on your network, how can you protect it? You can’t. Not only does this introduce significant enterprise risk, but, in the case of GDPR, failing to implement proper security controls and data privacy measures on IoT devices leaves you vulnerable to compliance fines and associated consequences, such as a damaged reputation and loss of customers.

Rather than risk noncompliance, many organizations are getting serious about IoT security. And it all starts with visibility.

Network infrastructure monitoring technology helps organizations unearth unknown networks and attached endpoints to gain complete network visibility into all assets residing across all computing environments, as well as data at rest, data in transit and data in process. Armed with this information, internal teams can answer important questions such as: What IoT devices are on corporate networks? What data is being held, where and why? Who’s accessing that data and is access appropriate? Most importantly, with an accurate understanding of the state of their network infrastructure, IT security teams can protect all IoT devices and the data they analyze and transmit as specified by GDPR.

Beyond understanding what IoT devices are on corporate networks, IT security teams must also know where they are, what they’re doing and who they’re communicating with to ensure proper security and compliance measures and to protect personally identifiable information. Helpful practices to consider include monitoring IP addresses on the network and how they’re moving, identifying potential leak paths and unauthorized communications to and from the internet, and detecting anomalous traffic and behavior.
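
As a rough illustration of the monitoring practices above, the following Python sketch audits a single traffic flow against a device inventory and a list of sanctioned external destinations. The inventory, address ranges and function names are all invented for this example:

```python
import ipaddress

# Hypothetical inventory of approved endpoints and sanctioned external
# destinations; a real deployment would populate these from profiling tools.
KNOWN_DEVICES = {"10.0.1.20", "10.0.1.21"}
ALLOWED_EXTERNAL = {ipaddress.ip_network("1.1.1.0/24")}

def audit_flow(src: str, dst: str) -> list[str]:
    """Flag unknown devices and potential leak paths to the internet."""
    findings = []
    if src not in KNOWN_DEVICES:
        findings.append(f"unknown device on network: {src}")
    dst_ip = ipaddress.ip_address(dst)
    if not dst_ip.is_private and not any(dst_ip in net for net in ALLOWED_EXTERNAL):
        findings.append(f"possible leak path: {src} -> {dst}")
    return findings
```

An internal-to-internal flow from a known device produces no findings, while an unprofiled device talking to an unsanctioned internet address would be flagged on both counts.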

The role of automated policy management

If knowing is half the battle, the other half is action. Once companies have a comprehensive understanding of the endpoints and data residing on their networks, they can develop “zones of control,” bringing each under the right network policies and access rules.

Automated policy orchestration tools help companies achieve continuous security and compliance with regulations, such as GDPR, because they enforce appropriate access policies, rules and configurations on all assets, regardless of how they change or move. Additionally, in the event of noncompliance, policy orchestration technology makes it easier for IT security teams to identify where the violation occurred — a capability that comes in especially handy if an organization needs to meet GDPR’s 72-hour breach notification deadline.
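
A toy sketch of how such zones of control might be represented, assuming hypothetical zone names and policy fields; a real orchestration product would derive these from its own policy model:

```python
# Each zone carries its own access policy, and every asset is checked
# against the policy of the zone it currently sits in. All names invented.
ZONES = {
    "iot-cameras":  {"allowed_ports": {443},     "internet_access": False},
    "corp-servers": {"allowed_ports": {22, 443}, "internet_access": True},
}

def find_violations(asset: dict) -> list[str]:
    """Return policy violations for one asset, tagged with its zone."""
    policy = ZONES[asset["zone"]]
    violations = []
    for port in asset["open_ports"] - policy["allowed_ports"]:
        violations.append(f'{asset["name"]} ({asset["zone"]}): port {port} not allowed')
    if asset["talks_to_internet"] and not policy["internet_access"]:
        violations.append(f'{asset["name"]} ({asset["zone"]}): internet access forbidden')
    return violations
```

Because each finding is tagged with the asset’s zone, narrowing down where a violation occurred becomes a matter of filtering the report.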

Prioritizing IoT security

To achieve IoT compliance in a GDPR world, organizations must have real-time visibility across all of their networks, devices, endpoints and data. They must be able to immediately detect any suspicious network behavior or compliance gaps. And they must automate response so they can quickly remediate security and compliance violations. Network infrastructure monitoring technology and automated policy management, along with the above tips, are a good start to achieving not only GDPR compliance, but a stronger security posture. And now that most companies have mastered the basic building blocks of GDPR, hopefully we’ll continue to see IoT security and compliance become a greater priority over the coming year.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 16, 2019  2:54 PM

Scaling the industrial internet of things

Michael Schallehn Profile: Michael Schallehn
Enterprise IoT, IIoT, industrial internet of things, Industrial IoT, Internet of Things, iot, IoT integration, IoT partners, smart manufacturing

Back in 2016, predictive maintenance was forecast to be one of the most promising uses of industrial IoT. It seemed like a no-brainer: Who wouldn’t want better information to prevent equipment failure?

So it’s somewhat surprising that predictive maintenance has failed to take off as broadly as expected. A recent Bain survey of more than 600 executives found that industrial customers were less enthused about the potential of predictive maintenance in 2018 than they were two years earlier. In our conversations with buyers, we heard that implementing predictive maintenance systems has been more difficult than anticipated, and it has proven more challenging to extract valuable insights from the data.

Predictive maintenance is just one of many IoT use cases that customers have had difficulty integrating into their existing operational technology and IT systems. As companies in the industrial sector have invested in more proofs of concept, many have found IoT implementation more challenging than they anticipated.

Because of this, we find that customer expectations, while still bullish for the long term, dampened slightly for the next few years (see Figure 1). Our 2018 survey found that buyers of industrial IoT services and equipment expect implementation to take longer than they thought it would back in 2016.

Figure 1: The IoT outlook for 2020 has dampened, but long-term targets remain bullish. Note: Red dashed lines indicate 2016 forecasts.

Bain’s 2018 survey also found that among industrial customers, concerns over integration issues — in particular, technical expertise, data portability and transition risks — have become more acute over the past two years (see Figure 2).

Figure 2: More experience with proofs of concept has shifted IoT customers’ concerns in the past two years.

  • In 2016, customers were most concerned about security, returns on investment and the difficulty of integrating IoT systems with existing IT and operational technology.
  • In 2018, security and integration were still top concerns, indicating that tech vendors haven’t made much progress in addressing them.
  • Fewer customers are concerned about ROI than in 2016, perhaps because they have been satisfied by the returns on their early implementations. Industrial IoT use cases are beginning to deliver on vendors’ promises.

Customers are increasingly worried about issues that arise during implementation: technical expertise, difficulties in porting data across different formats, and the transition risks. Proofs of concept have revealed these challenges, and companies now realize that although the effort pays off, the devil is in the details.

Despite these barriers, industrial IoT remains a promising opportunity. Bain’s research indicates that the industrial portion of IoT — including software, hardware and system solutions in the manufacturing, infrastructure, building and utilities sectors — continues to grow rapidly, and could double in size to more than $200 billion by 2021 (see Figure 3).

Figure 3: The industrial IoT market could reach $200 billion by 2021.

To capture that opportunity, device makers and other vendors of industrial and operational technology need to dramatically improve their software capabilities — not a historical strength for most of them. Leaders are investing heavily in acquisitions to obtain the necessary capabilities and talent. Most of this M&A activity targets companies further up the technology stack — in the realms of software and systems integration — than the core capabilities of industrial companies.

As vendors and manufacturers work to build scale, four groups of actions can help position them for long-term success:

  • Concentrate your bets. Focus on select use cases and tackle the key barriers to adoption: security, ROI and integration with IT and operational technology. Learn from proofs of concept and develop repeatable playbooks. Package IoT solutions into scalable products that you then can roll out to customers.
  • Find good partners. Acknowledge your capability gaps and find partners to address them. Work closely with cloud service providers, analytics vendors or enterprise IT vendors. At the same time, avoid broad and unwieldy alliances with too many players; partnerships tend to be more effective with a selective approach based on the use case.
  • Understand it may take a while to break even. Building capabilities and forging strong partnerships takes time, so commit to long investment periods. Approach the effort with a realistic view of the funding, timeline and staffing changes needed to deliver results.
  • Identify new talent. Your best employees excel at their jobs, but new operating models may require different skills. Learn to identify, hire and retain the entrepreneurial talent to thrive in your evolving business model.

Finally, companies will need to be clear on where IoT fits into their operating model. Some executives worry about new products cannibalizing existing products and their revenue. Companies need to allow internal entrepreneurs to build new lines of business without alienating the rest of the organization.


This article was cowritten with Peter Bowen, Christopher Schorling and Oliver Straehle, partners with Bain’s Global Technology practice in Chicago, Frankfurt and Zurich, respectively.

May 16, 2019  12:26 PM

It’s time to get serious about securing the internet of things

Dan Timpson Profile: Dan Timpson
Certificate authority, digital certificate, Internet of Things, iot, IoT authentication, iot security, IoT threats, PKI, Public-key infrastructure, securing IoT

Not long ago, many IT leaders viewed IoT as little more than an interesting science project. Today, companies in every industry rely on IoT insights as part of their core business strategies. According to DigiCert’s recent “State of IoT Security Survey 2018” (registration required), 92% of companies expect IoT to be important to their business by 2020. In all, analysts project the global IoT market to more than double by 2021, reaching about $520 billion.

That’s a whole lot of new devices popping up on the world’s networks. And there’s one group that can’t wait to get their hands on them: cybercriminals. With nearly 10 billion IoT devices forecast to come online by 2020, attackers see billions of new potential attack vectors. The fact that many connected devices still ship with inadequate security makes them even more attractive targets.

“Businesses are bringing insecure devices into their networks and then failing to update the software,” said Vik Patel in a recent conversation with Forbes. “Failing to apply security patches is not a new phenomenon, but insecure IoT devices with a connection to the open Internet are a disaster waiting to happen.”

For some companies, the disaster is already here. According to the DigiCert survey, among organizations struggling to master IoT security, 100% experienced a security mishap — IoT-based denial-of-service attacks, unauthorized device access, data breaches, IoT-based malware — in the past two years. Those issues can carry a big price tag. A quarter of struggling organizations reported $34 million or more in incurred costs from IoT security mishaps.

Fortunately, the IoT security problem is far from intractable; there are mature, proven strategies that organizations can employ to secure connected devices. But the key is to take those steps before a vulnerability or breach is identified instead of trying to retrofit devices after the fact. The most successful organizations employ a security-by-design approach using public key infrastructure (PKI) and digital certificates. Using PKI to reinforce the security basics — authentication, encryption, data and system integrity — you can keep your IoT footprint ahead of the threat.

Why PKI?

Based on the same PKI standard that millions of websites rely on every day for secure connectivity, PKI provides an ideal framework for mutual trust and authentication in IoT. In addition to encrypting sensitive traffic, PKI verifies that IoT devices — and any users, devices or systems communicating with them — are who they claim to be. When all parties to IoT communications have a trusted digital certificate vouching for their legitimacy, it becomes much harder for malicious actors to, for example, hijack a device or inject malware into its firmware.

PKI is a perfect match for the exploding IoT sector, as it can provide trust and control at massive scales in a way that traditional authentication methods, like tokens and passwords, can’t. PKI provides:

  • Strong data protection: PKI can encrypt all data transmitted to and from IoT devices, so that even if a device is compromised, attackers can’t do anything with the data.
  • Minimal user interaction: With digital certificates, PKI authenticates users and devices behind the scenes, automatically — without the interruptions or user interaction required by passwords and token policies. Certificates also provide stronger identity by including information such as the device serial number.
  • Secure code: Using code signing certificates, companies can sign all code on the device firmware, assuring only trusted code can operate on the device. This protects against malware and supports secure over-the-air updates to the device.
  • Effortless scalability: Originally designed for huge networks and web services with vast numbers of users, PKI can easily scale to millions of IoT devices.
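
For a concrete flavor of certificate-based mutual authentication, here is a minimal sketch using Python’s standard ssl module. The function name and file paths are placeholders; in practice each device would hold a certificate issued by the organization’s CA:

```python
import ssl

def make_server_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """TLS context for an IoT endpoint that requires client certificates."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED        # every client must present a cert
    ctx.load_verify_locations(cafile=ca_file)  # trust only our device CA
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

Note that a freshly created server context does not verify clients by default; mutual authentication has to be switched on explicitly.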

Companies typically choose from two options for deploying PKI: implementing and operating their own private PKI framework on-premises, or using hosted PKI services from a public certificate authority. Which approach is right for your organization? Let’s evaluate the four C’s.

The four C’s of PKI

Consideration #1: Control
How much control do you need over your certificate infrastructure? It often depends on your industry. In heavily regulated industries with complex, rigorous compliance requirements, many companies keep everything in-house. This does provide fine-grained control and comprehensive auditing capabilities. But it also requires significant time, money and expertise. Someone in the company must “own” the process of ensuring the framework adheres to industry standards, enforcing policies to establish trusted roles, managing key ceremonies and data storage policies, ensuring reliable certificate renewals and revocations, and much more. The resources required to do PKI in-house right — and the potential to do it wrong and cause significant damage — are often more than a company wants to take on.

This is not a small effort, and it’s not for amateurs. This is why many companies in less-regulated industries, and even many in regulated ones, prefer a hosted solution, letting a public certificate authority handle all the complexity. If you need the control of an on-premises system but you don’t want the management headaches, some PKI providers offer hybrid models. These combine on-premises systems that can issue publicly trusted certificates through a secure gateway that communicates directly with a scalable cloud issuance platform.

Consideration #2: Cost
When you deploy and manage your own private PKI framework, you can build exactly the system you want. But don’t expect it to come cheap. Standing up an internal certificate authority entails initial hardware and software acquisition, and often extensive investments in training and personnel.

Beyond the initial implementation, expect to devote ongoing resources to maintaining the on-premises PKI framework: keeping up with audits, tracking evolving industry standards, updating hardware and software, as well as ensuring device integrity throughout the lifecycle. The total cost of ownership can be significant. This is why most companies that have the option choose hosted PKI offerings with more manageable, predictable economics.

Consideration #3: Crypto-agility
If your PKI is going to actually protect your IoT footprint — and your stakeholders’ or customers’ data — it needs to use up-to-date cryptography. That doesn’t happen automatically. Whoever owns the PKI framework needs to monitor and participate in standards groups to stay ahead of changing threats and implement continually evolving protocols. If you’re operating your own on-premises certificate authority, make sure you build that ongoing effort into your PKI budget.

Here again, companies across the board increasingly opt for cloud-hosted PKI. When standards shift or cryptographic properties change, a hosted PKI provider — whose core business entails investing in PKI staff and architecture — is ready for it. The leading public certificate authorities typically anticipate changes to curves, algorithms and hashes well before they are widely known or implemented. Getting ahead of quantum computing threats to today’s encryption algorithms looks to be the next frontier.

Consideration #4: Certificate Management
Managing the full lifecycle of certificates across a large volume of devices — even millions or billions of them — is not an easy task to run in-house. It requires a technology stack and strong policies and procedures to issue, install, renew and revoke certificates. Many vendors look to a trusted third party with automated offerings to discover and manage certificates, especially one with a track record of having already provided certificate-based authentication for billions of connected devices.
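
As a simplified illustration of lifecycle tracking, this Python fragment finds certificates due for renewal. In practice the expiry dates would be read from the certificates themselves; the inventory shape here is invented:

```python
from datetime import datetime, timedelta, timezone

def certs_needing_renewal(inventory: dict[str, datetime],
                          now: datetime,
                          window: timedelta = timedelta(days=30)) -> list[str]:
    """Device IDs whose certificates expire within the renewal window."""
    return sorted(dev for dev, expiry in inventory.items() if expiry - now <= window)
```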

Don’t put off IoT security

The days when you could launch an IoT initiative without a sound strategy to authenticate devices and ensure data and system integrity are over. IoT is very much on the radar of cybercriminals. And the costs of reactive, after-the-fact security can easily climb to tens of millions of dollars. On the flip side, those companies that do IoT security right can reap major benefits. Bain & Company found that enterprises would buy more IoT devices — and pay up to 22% more for them, on average — if they were more confident that they were secure.

Before launching any new IoT application, make sure you’re building standards-based PKI security and authentication into the basic design of your architecture. Whether you manage certificates yourself or work with a hosted certificate authority, you’ll sleep better knowing your IoT footprint can’t be easily compromised. And your business will be able to capitalize on the full power and potential of IoT.


May 16, 2019  11:50 AM

The realities of enterprise data lakes: The hype is over

Joy King Profile: Joy King
Big Data, Data Analytics, Data lake, Data Lakes, Edge analytics, enterprise data hub, IoT analytics, IoT data

For the last decade, we have seen an interest expand to an obsession: grab the data, store the data, keep the data. The software industry saw an opportunity to capitalize on this obsession, leading to an explosion of big data open source technologies, like Hadoop, as well as proprietary storage platforms advertising their value as “data lakes,” “enterprise data hubs” and more. In a growing number of industries, the goal has been achieved: Ensure you have as much data as possible and keep it for as long as possible.

Data is the new oil, but mining for value requires lots of pipes

Now comes the next phase of any hype cycle: reality. Data is indeed the new oil or the new gas, but none of this matters if value cannot be mined from the data. The oil and gas industry has an advantage. In each identified location, an oil well is created by drilling a long hole into the earth and a single steel pipe (casing) is placed in the hole, allowing the oil to be extracted. When the oil is extracted, it is processed and then brought to market. No integration with other oil repositories is necessary. Unfortunately, this is not the case when drilling for business value in individual data lakes and data hubs.

The manufacturing industry, and specifically manufacturing plants, is one of the most complex examples of how data can be collected yet remain limited in value. Each plant collects its own data and, in some cases, stores that data in public or private clouds. The plant can (sometimes) use that data to optimize its own environment, understand what is happening and maybe even predict what is going to happen. But what about the trends, the insights and the continuous improvement practices that could benefit the multiple and widely distributed manufacturing plants for a large enterprise? What about the optimization between manufacturing, inventory management, supply chain and distribution? All of these groups have their own data, but a single pipe can’t reach it.

Beware of centralizing the data

So, what is the solution? Some would say that it’s critical to centralize the data, to ensure that it is co-located in a single public cloud object store or a centralized data warehouse. But the 1980s are well behind us. Nevertheless, this approach is gaining some attention in the market from some of the cloud vendors and cloud-specific systems. They have a great motivation to reach for the data because of a newer and very dangerous term: data egress. Getting data into a central location is not easy, but it is doable. Getting data out of a single cloud or solution provider is very, very difficult and expensive because once the data is within a single environment, the vendor has control. The reality of distributed data is what we have to address, and this requires a completely different approach. The new reality is bringing the analytics to the data where it resides and in what format it needs to be but ensuring that this does not result in a tangled mess of pipes.
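
One way to picture “bringing the analytics to the data” is with partial aggregation: each site computes a small local summary, and only the summaries, not the raw data, travel to a coordinator. The sketch below is illustrative, not a real federated engine, and the site data is invented:

```python
def local_summary(readings: list[float]) -> tuple[float, int]:
    """Computed at each site: only (sum, count) leaves the premises."""
    return sum(readings), len(readings)

def global_mean(summaries: list[tuple[float, int]]) -> float:
    """Computed at the coordinator from the small per-site summaries."""
    total = sum(s for s, _ in summaries)
    count = sum(n for _, n in summaries)
    return total / count
```

The raw readings never leave their plant, yet the enterprise-wide statistic is exact, which is the essence of analyzing distributed data in place.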

Deriving business value with analytics

Successful industry disruptors focus on the business value derived from the analytical insights from their data, not simply the data collection. They each start with an end goal in mind and achieve it using a unified analytics platform that respects the data format and the data location, and applies a consistent and advanced set of analytical functions without demanding unnecessary, expensive and time-consuming data movement. A unified analytics platform is also open to integration within a broader ecosystem of applications, ETL tools, open source innovation and, perhaps most importantly, security and encryption technologies. On top of it all, a unified analytics platform delivers the performance needed for the scale of data that is the new normal in today’s world.

The hype cycle of data lakes is over, and the reality and the risk of data swamps are real. Combined with the confusion and uncertainty regarding the future of Hadoop, the time is now to architect — or rearchitect. And it’s imperative to start with the right end goal in mind: how to mine the data in a unified, protected and location-independent way without creating delays that undermine the business outcome.


May 15, 2019  4:07 PM

How the 5G RAN supports the internet of things

Morné Erasmus Profile: Morné Erasmus
5G, 5G network, Internet of Things, iot, IOT Network, IoT networking, radio access network, RAN, Virtualization

Over the last few years, two acronyms that offer a vision of the future have become ubiquitous across the technology and communications industries: IoT and 5G. IoT is a broad term describing a future in which much of electronic communication will take place between autonomous devices. 5G is the fifth generation of mobile wireless. Let’s look at how the 5G radio access network (RAN) will support IoT.

IoT envisions communications between billions of devices. Although previous generations of mobile technology have provided some capability for machine-type communications, like meter reading and asset monitoring, these capabilities have either been designed as “over the top” custom applications or they have been built into 4G standards as an afterthought — think Narrowband-IoT and LTE-M, for example. 5G is the first standard to support machine communications from the beginning; the standard supports massive machine-type communications and ensures that the RAN will meet these needs.

Beyond changes to the standard, however, serving broad-based IoT requirements leads to additional considerations when designing the 5G RAN. Users will have high expectations that there will be sufficient coverage to deliver service to IoT devices anywhere they are installed, whether inside buildings or in the outside environment.

5G networks are being designed around three core application models:

  1. Speed — Enhanced mobile broadband
  2. IoT — Mass device deployments
  3. Ultra-low latency applications

How does the 5G RAN meet these challenges?

5G networks are being designed to be almost 10 times faster than 4G technology, so they support a far wider range of applications.

Source: Wiredscore

5G supports 10 times as many connections per square kilometer, which is important because there will be billions of IoT devices to connect. Support for more connections translates to less equipment in the network, smoother deployments and faster deployment times.

In addition, the 5G RAN will extend to both indoor and outdoor radio sites. We will need coverage in buildings and factories, so there will be a mix of indoor and outdoor network equipment. In-building wireless, including small cells and distributed antenna systems, will drive RAN signals into buildings, while outdoor applications will be supported with everything from macro cell towers to small cells.

Low latency
Support of services that require low or ultra-low latency can be achieved by optimizing the location/distribution of the baseband processing elements in the radio access network. This is supported within 5G standards by moving time-sensitive elements of baseband processing closer to the network edge.

In 5G, the baseband elements are broken down into a centralized unit (CU) for the non-real-time functions and a distributed unit (DU) for the real-time functions. To achieve minimum latency in the network, the DU and/or the CU are moved close to the network edge, typically to the radio access node or to a hub location.
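
A back-of-the-envelope calculation shows why placement matters. Light travels through fiber at roughly 200 km per millisecond, so propagation alone sets a floor on latency; the figures below are rough approximations:

```python
SPEED_IN_FIBER_KM_PER_MS = 200  # approximate speed of light in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip fiber propagation delay, ignoring processing time."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS
```

A DU hosted 500 km away adds about 5 ms of round-trip propagation before any processing, while one 20 km away adds only about 0.2 ms, which is why time-sensitive baseband functions move toward the edge.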

Another major change in 5G architecture is the amount of virtualization in the RAN. A lot of the traditional centralized components will become virtualized and run on server platforms, which is ideal for IoT because it provides a common data center architecture that houses both data center resources and a piece of the RAN.

Look back on how computer virtualization changed IT architecture a decade ago by sharing underutilized compute resources among multiple workloads. We are seeing the same shift in the RAN, where a baseband unit’s capabilities are being shared among multiple cell sites.

Virtualized RAN components will be less expensive because they run as software on standard servers rather than the proprietary, hardware-specific devices used in the past. There will be variations depending on what the IoT network is trying to do; for example, low-latency applications will require that some of the RAN components are located closer to the end devices. Remote surgery and autonomous driving are examples of such low-latency use cases.

So, in terms of raw capabilities in the standard, a denser network and virtualization, the 5G RAN will support IoT applications with higher speeds, lower latency and greater reach. 5G will be the first cellular standard that satisfies IoT’s huge demands for connectivity.


May 15, 2019  12:54 PM

Limiting bias and inexperience in AI-powered factories of the future

Saranyan Vigraham Profile: Saranyan Vigraham
ai, AI algorithms, AI designers, IIoT, industrial AI, Industrial IoT, Industry 4.0, Internet of Things, iot, smart manufacturing

The United Nations Sustainable Development Goals 8 and 9 are important in the context of Industry 4.0 and industrial IoT. SDG-8 calls for decent work and economic growth, while SDG-9 calls for innovation in industry and infrastructure. The purpose of the SDGs is to improve social conditions and advance humanity. AI plays a critical role in accomplishing this. For instance, let’s look at the innovation that’s happening in the Industry 4.0 space and where AI systems are proving efficient in preventing human errors and improving efficiency. The case studies from early AI systems clearly demonstrate that AI can not only improve efficiency metrics, like yield and throughput, but it can also reduce material waste and harmful emissions. In these scenarios, AI will create a net gain for us as a society, improving human conditions.

AI can transform humanity by giving time back to humans to focus on more productive tasks. There are new skills to be learned and it is clear that specific types of work will be displaced by new ones. For the sake of this article, let’s assume that we are able to empower our current factory workers with new skills that make them relevant and productive in the age of AI. If we do that, are we all set? Is that the only societal challenge we have for realizing the potential of AI completely?

In an ideal world, AI systems work seamlessly with humans to create factories of the future that are lean, efficient and environmentally friendly. But we are far from that ideal world for two reasons: the current infrastructure in industrial settings for collecting and providing accurate data, and algorithmic biases.

There are different ways of architecting AI systems. The most common way is to model the behavior of the world through data and make decisions based on the realized model of the world. As you can see, this is problematic. What if the data is not accurate? What if we don’t have enough data? What if our data only partially captures the world we want to model?

With the recent industrial IoT revolution came a surge of data available in factories. This opens the door to applying AI to factory operations. The challenge, however, is that the data is not ideal in several ways. Data collection processes were never optimized for a future AI application; rather, they were built for simple responsive actions and decision-making. This shows up when the data is used to create machine learning models for building smart automation or predictive maintenance tools. Problems with the data can include incorrect sample rates, compressed or lossy data, incorrect readings from faulty sensors or mechanical degradation, and so forth.
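
Some of these data problems can be caught with simple screening before any model is trained. The sketch below flags out-of-range samples and long constant runs (a stuck or degraded sensor); the thresholds and function name are illustrative:

```python
def flag_bad_readings(series: list[float], lo: float, hi: float,
                      max_flat_run: int = 5) -> list[int]:
    """Indices of suspect samples in a sensor time series."""
    # Out-of-range values suggest a faulty sensor or transmission error.
    bad = {i for i, v in enumerate(series) if not (lo <= v <= hi)}
    # Long runs of an identical value suggest a stuck sensor.
    run_start = 0
    for i in range(1, len(series) + 1):
        if i == len(series) or series[i] != series[run_start]:
            if i - run_start > max_flat_run:
                bad.update(range(run_start, i))
            run_start = i
    return sorted(bad)
```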

Algorithmic bias in AI, simply put, is a phenomenon where an AI deployment has a systematic error causing it to draw improper conclusions. This systematic error can creep in either because the data used to model and train the AI system is faulty, or because the engineers who created the algorithms had an incomplete or biased understanding of the world.

There have been several articles published about human bias contributing to biased AI systems, and there is well-documented evidence of AI systems showing biases in terms of political preferences, racial profiling and gender discrimination. In the context of Industry 4.0 applications, these human biases are as big a problem as data bias.

Going back to the SDGs discussed above, we should aspire to improve the human condition by providing people with meaningful work. Take the example of Ernesto Miguel, who has worked at a cement factory as a plant operator for the last 30 years. Ernesto spends most of his time ensuring the equipment under his watch functions efficiently. Over the last three decades, he has formed an intimate bond with the machines in his factory, developing an extraordinary ability to predict what might be wrong with a machine just from the sound it makes. He could do more, like training other workers to develop the same intuition. He wants to share his expertise, but unfortunately Ernesto spends most of his time reacting to equipment problems and preventing failures. This is a problem ripe for AI.

We deployed one of our AI systems to model a crucial piece of plant equipment — a cooler — in a cement factory. The idea was to learn how adequately we could model equipment behavior by looking at two years’ worth of time series data. The data provided a great deal of insight into how the cooler was operating. Using the data, our engineers were able to identify correlation between different inputs to the equipment and its corresponding operating conditions.

If this worked flawlessly, we would accomplish two goals: use smart AI systems that could keep the equipment functioning in an optimum way and allow Ernesto to focus on more meaningful work, such as effectively training other factory workers.

Bias creeps in inadvertently when AI system designers confuse data with knowledge.

It was a big moment when the first AI system was deployed in the cement plant. We don’t yet live in a world where we can trust machines completely, and for good reason. So, there was a safety switch included for the plant operator to intervene if something went wrong. The first exercise was to run the software overnight, where the AI system monitored the cooler and was responsible for keeping it within safe bounds. To the delight of everyone, the system successfully ran overnight. But that joy was short-lived when the first weaknesses in the model started appearing.

The cooler temperature was increasing. And the model with an established correlation between the temperature and fan speed kept increasing the fan speed. In the meantime, the back grate pressure rose above the safe value. But the model identified no correlation between the back grate pressure and the temperature and felt no need to adjust the back grate pressure in its objective of bringing down the cooler temperature. The plant operator overrode the control and shut off the AI model.

An experienced plant operator would have immediately responded to the increasing back grate pressure, as it is detrimental to the cooler's operation. How did the AI model miss this?

In his 30 years, Ernesto never had to wait for the grate pressure to build up before reacting. He just knew when the pressure would build and proactively controlled the parameters to ensure that it never crossed a safe bound. Merely by looking at the data, the AI engineers had no way to determine this. The data alone, without context, would tell you that the grate pressure would never be a problem.
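The failure mode can be illustrated with a toy calculation. In the hypothetical data below, the operator always held grate pressure flat regardless of temperature, so an ordinary least-squares fit of pressure against temperature yields a slope of essentially zero, and a model trained on this data concludes the two are unrelated:

```python
def fitted_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical historical data: the operator always intervened, so
# recorded grate pressure stays near its setpoint no matter what the
# cooler temperature did.
temps     = [120.0, 135.0, 150.0, 165.0, 180.0]  # cooler temperature
pressures = [2.0,   2.0,   2.1,   2.0,   2.0]    # kept flat by the operator

slope = fitted_slope(temps, pressures)
# The near-zero slope leads the model to conclude pressure is unrelated
# to temperature: the operator's skill has been erased from the data.
print(round(slope, 3))
```

The physical coupling between temperature and pressure is real, but because the human always pre-empted it, it never appears in the recorded data.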

Bias hurts AI systems in many ways. The biggest of all is that it erodes trust in these systems. On top of watching his workers and equipment, Ernesto now has to watch the AI models. He has to teach the system to do things differently, which the system then has to learn, and the next versions will improve. This will always be a problem when we model AI systems purely from incomplete or inaccurate data, and in industrial IoT settings the data will always be incomplete or inaccurate to some degree.

As technology builders, what does this mean for us? How do we realize the full potential of industrial AI systems? The answer lies in us starting to design these systems with empathy and taking a thoughtful approach:

  • We cannot assume that data is a complete representation of the environment we are aspiring to model.
  • We need to spend time doing contextual inquiry, a semi-structured interview guided by questions, observations and follow-up questions while people work in their own environments, to understand the lives of the workers whom we are trying to empower with AI systems.
  • We need to assess all the possible scenarios that could occur in the problem we are trying to solve.
  • We need to always start with a semi-autonomous system and only transition to a fully autonomous system when we are confident of its performance in production environments.
  • We should continually adapt and train models to learn about the environment we are operating in.
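The semi-autonomous pattern in the fourth point can be as simple as gating every model action behind hard safety limits and a human override, like the safety switch used in the cement plant's first overnight run. A minimal sketch, with entirely hypothetical limits and a made-up fallback:

```python
def apply_action(model_action, current_pressure, operator_override=None,
                 max_fan_speed=900.0, max_safe_pressure=3.5):
    """Gate an AI-proposed fan speed behind safety limits and the operator.

    Returns the fan speed that is actually sent to the equipment.
    """
    # The human override always wins, exactly like a physical
    # safety switch in the control room.
    if operator_override is not None:
        return operator_override

    # Refuse to act while a safety-critical reading is out of bounds;
    # hold a conservative default instead of trusting the model.
    if current_pressure > max_safe_pressure:
        return 0.0  # hypothetical safe fallback: stop the fan

    # Otherwise clip the model's suggestion into the allowed range.
    return max(0.0, min(model_action, max_fan_speed))
```

The model is free to optimize within the envelope, but it can never drive the equipment outside bounds a human has declared safe, and the operator can take over at any moment.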

Bringing AI into factory settings is more than just technology. It is about people. It is also about doing something with empathy and understanding the people whose lives the technology is going to touch.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 15, 2019  11:43 AM

Gaining the monetization edge in IoT deployments

Jack Norris Profile: Jack Norris
ai, containers, Edge computing, edge processing, Internet of Things, iot, IoT analytics, IoT cloud, IoT data, IoT edge, IoT monetization, IoT use cases, Machine learning, monetization

It’s one thing to recognize that workloads are becoming more distributed and ushering in new opportunities. It’s entirely different to understand how distributed workloads can effectively be monetized at the edge.

Examples of edge monetization include a major medical equipment provider that has deployed thousands of edge locations within hospitals. Medical information along with machine sensor data is anonymized and transmitted to the cloud, where information across all hospital deployments is aggregated and analyzed to improve diagnostics, equipment performance and uptime. Their IoT deployment is part of a larger architecture that includes containers, machine learning and cloud processing.

Another example is a leading automobile manufacturer that is treating each autonomous driving test vehicle as a digital mobile edge that generates terabytes of data per vehicle per day. Each vehicle is part of a larger system that is constantly learning, gaining intelligence across all events and pushing the collective intelligence back out to the edge. This application focus is very similar to leading energy companies that have deployed real-time drilling applications that adjust to pursue the optimal drill path while monitoring to prevent breakage and downtime.

The key to these applications — and monetizing the edge in general — is understanding how to coordinate each edge location as part of a larger whole. This requires collecting data from each edge to see the global picture. Analytics cannot simply be descriptive, reporting on historical events at the edge. They should also be predictive, using intelligence about past events to better anticipate future ones, such as equipment failures. Most significant of all, however, are prescriptive analytics that inject intelligence at the edge to respond in real time. These are the foundation of game-changing edge applications such as autonomous driving and real-time drilling.

Monetizing the edge depends on a persistent data fabric. A data fabric encompasses many different data types: files, tables, streams, videos and so forth. A fabric can also parse event streams that act as digital threads for advanced AI. New models can replay these streams or threads, making it easy to compare new models to existing ones, speed burn-in time and increase accuracy. This is the important layer that can process and collect interesting event data to learn globally and act locally.
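The replay idea can be sketched in a few lines: run a recorded event stream through both the current model and a candidate, and score each against the outcomes captured in the stream. The models, field names and data below are purely illustrative:

```python
def replay_and_compare(events, models):
    """Replay a recorded event stream through several models and
    score each one against the outcomes captured in the stream.

    `events` is a list of (features, actual_outcome) pairs;
    `models` maps a model name to a predict(features) callable.
    """
    scores = {name: 0 for name in models}
    for features, actual in events:
        for name, predict in models.items():
            if predict(features) == actual:
                scores[name] += 1
    total = len(events)
    return {name: hits / total for name, hits in scores.items()}

# Hypothetical failure-prediction stream: (temperature reading, failed?)
stream = [(82, False), (95, True), (70, False), (99, True), (91, False)]
results = replay_and_compare(stream, {
    "current": lambda t: t > 90,    # existing threshold model
    "candidate": lambda t: t > 93,  # new model under evaluation
})
```

Because the stream is persisted rather than discarded, the candidate model is burned in against real history instead of waiting weeks for fresh events.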

So, as you look at IoT at the edge, keep in mind that it’s not just a one-way path of sensor data that is collected centrally. A larger system of data flows to and from and across edge devices is required. And to fully monetize, you will eventually need to embrace AI, cloud and container technology.


May 14, 2019  3:41 PM

Analytics: Can IoT manage what it can’t measure?

Wojciech Martyniak Profile: Wojciech Martyniak
Data Analytics, Internet of Things, iot, IoT analytics, IoT cloud, IoT data, IoT devices, IoT edge, IoT infrastructure, IOT Network, iot security

A few years ago, disruptive technologies expert and author Geoffrey Moore described the importance of data analytics in fairly dramatic terms: “Without big data analytics,” he wrote, “companies are blind and deaf, wandering out onto the web like deer on a freeway.”

The same concept applies to IoT data analytics. In most cases, organizations need the insights that their connected assets collect in real time, as IoT data has a short shelf life. When data flow is slow, achieving real-time analytics becomes impossible, meaning decisions are made without the critical insights that data is meant to provide. As a result, time-sensitive functions, like security monitoring, predictive repair and process optimization, suffer.

It’s important to understand the challenges that create such issues. Like IoT itself, the factors contributing to data flow delay and the resulting detrimental impact on analytics are complex and have grown over time. They are driven, in large part, by the sheer volume and complexities of the data that’s generated, infrastructure limitations and the latencies associated with cloud processing.

Data deluge

As IoT grows, the data it produces increases at staggering rates. A recent IoT Analytics report estimated that IoT will comprise 19.4 billion devices this year. With 10% growth in devices expected each year, it’s further estimated that there will be more than 34 billion IoT devices by 2025. Furthermore, a report from IDC predicted that IoT devices will create 90 zettabytes of data a year by 2025.
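Those two device estimates are mutually consistent, as a quick compounding check shows:

```python
devices = 19.4e9     # estimated IoT devices in 2019
for _ in range(6):   # compound growth from 2019 through 2025
    devices *= 1.10  # 10% year-over-year growth
print(round(devices / 1e9, 1))  # billions of devices in 2025
```

At 10% annual growth, 19.4 billion devices in 2019 becomes roughly 34.4 billion by 2025, matching the report's figure.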

Moreover, the data that’s generated in the field doesn’t necessarily come in nice, easy-to-process packages. It can be intermittent, unstructured and dynamic. Challenges will continue to increase as machine learning gains more widespread application in the field. These complex devices require more memory and CPU; without them, processing slows even further.

Infrastructure and security issues

Amplifying the challenges of continuously rising volumes of data are limitations in the technology being used to collect, transfer, cleanse, process, store and deliver data. Many of these challenges are rooted in the fact that the technology was not necessarily intended for those purposes. When limitations exist in functionality and scalability, it increases the likelihood that data processing and delivery will be delayed.

As such, it’s increasingly critical for organizations to invest in new data management technologies and platforms. One option is to “try on” potential new IoT technologies through proofs of concept or pilot studies before investing in a full-scale launch. Regardless, the technology used needs to be scalable and capable of handling inevitable increases in data, storage and computing demands.

Security is another important consideration. A streaming-first architecture can be valuable in this regard as it allows organizations to analyze multiple endpoint security system logs. Additionally, infrastructure component logs that are created in real time can catch breaches that wouldn’t necessarily be detected through an individual security technology.

Addressing these issues, however, is only part of a long-term management solution.

The cloud, the edge and standardization

While cloud computing is integral to IoT, sending data to the cloud — and waiting for it to be sent back — can bog down data delivery speeds. This is particularly true when large amounts of data are involved. Moving one terabyte of data over a 10 Mbps broadband network, for example, can take as much as nine days to complete.
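That nine-day figure follows directly from the link arithmetic:

```python
terabyte_bits = 1e12 * 8  # one terabyte expressed in bits
link_bps = 10e6           # a 10 Mbps broadband link
seconds = terabyte_bits / link_bps
days = seconds / 86400    # seconds per day
print(round(days, 1))
```

Eight trillion bits over a 10 Mbps pipe works out to about 9.3 days of continuous transfer, before accounting for protocol overhead or contention.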

This is where the benefits of IoT edge computing are most evident. IoT edge computing allows data processing to take place directly on IoT devices or gateways near the edge of the network, meaning the information doesn’t have to make a round trip to the cloud before results are delivered. Removing the step of sending all data to a centralized repository minimizes the latency issues that come with cloud computing and eases device complexity and bandwidth constraints.

However, this is not a one-size-fits-all solution. Edge servers and devices don’t necessarily have the computing power to accommodate enterprise data. They may have battery limitations and lack the storage necessary for analytics. This means that when greater storage and computing power are necessary, analytics have to be distributed across devices, edge servers, edge gateways and central processing environments.

Another key factor is the need to standardize the communications between devices and network elements. When multiple approaches and implementations are used, devices struggle to communicate and the flow of data is slowed.

It’s encouraging to note that work is already underway to create standards for IoT. The ITU already has a global standards initiative in place, and oneM2M is working to create architecture and standards that can be applied across many different industries, including healthcare, industrial automation and home or building automation.

Despite the range of challenges to be addressed, the consistent, timely delivery of data for analysis is doable. With a multipronged strategy and a willingness to invest in infrastructure when needed, organizations can realize the full potential of IoT, including its capacity to generate revenue.


May 14, 2019  1:54 PM

Massive IoT and 5G: New technologies, new possibilities

Helena Lisachuk Profile: Helena Lisachuk
5G, 5G and IoT, 5G network, Bandwidth, cellular IoT, cellular network, Internet of Things, iot, LPWAN, massive IoT

We’ve all heard the mind-stretching IoT statistics by now: 73 billion IoT-connected devices installed by 2025! $1.6 trillion to be spent on purchasing those devices — in 2020 alone! These numbers can be disorienting, but as you might expect, they’re best consumed with a few grains of salt.

Every technology has its strengths and weaknesses, and IoT is no exception. An IoT deployment isn’t for everyone, and for those for whom it is appropriate, adopting the right technology mix is less about trying to surf the huge wave of hype and more about balancing tradeoffs between different options. (I’ve written about making such tradeoffs before).

But even with all these caveats, I’m going to put a small toe in the hype waters and predict that the fifth generation of wireless communication, aka 5G — which brings together a wide range of new wireless technologies into a network of networks — will render a lot of the usual IoT communication tradeoffs obsolete. In fact, I believe 5G, along with other coming cellular technologies, will bring us an entirely new generation of IoT: truly massive IoT. Here’s how.

New technology, new possibilities

Time was that building an IoT system meant balancing performance factors like bandwidth, range and latency against support requirements like power, size and cost: Better performance also meant greater power consumption, size and cost, while low-power communications with small footprints came with severe performance limitations.

But recently introduced cellular technologies, such as LTE-M and Narrowband-IoT (NB-IoT), are offering greater power efficiency at longer ranges than anyone thought possible with cellular, with the added benefits of greater mobility, lower latency and better performance.

Together, these technologies can give IoT users lower costs and more options at the low end of performance, and entirely new capabilities at the high end:

Lower performance = Lower costs + more options
Cellular can now offer a competing option to the handful of communication protocols that previously met the needs of low-power, small-footprint systems, while at the same time driving down costs. Some areas of the world are already seeing 2G components sell for under $2 and NB-IoT for $3, whereas as recently as 2017, prices in the UK were in the $12-$17 range.

New options = New capabilities
Integrating existing technologies into 5G will bring new capabilities for managing networks at the higher end of the performance spectrum, creating entirely new options for IoT users. For example, many 5G networks can support the formation of mesh networks of smart devices, so individual IoT endpoints don’t need to communicate directly with wireless towers, instead working through other devices that connect to the tower. So now IoT devices in austere environments, like oil rigs or connectivity-challenged places like basements, can tap into high-speed cellular networks, opening up data-hungry use cases like predictive maintenance or AI tools.

More of a good thing

These cellular technologies paving the way to 5G won’t replace existing communications technologies, but augment them by giving IoT devices access to the whole RF spectrum. The result: an IoT that can do more things in more places with more devices.

Up to now, there have been few options for users at the low end of the performance spectrum, with low-power wide area networks among the most common. The addition of a low-power, inexpensive cellular option will likely spur IoT adoption in industries where penetration has been slow. Infrastructure like pipelines or wind turbines in austere environments doesn’t need to transmit much data, so paying to connect it via cellular coverage — normally expensive in remote locations — hasn’t historically been a great option. And it’s been hard to create wide area networks over the vast landscapes this infrastructure inhabits. But new cellular technologies can break that tradeoff, allowing for cheaper communication with less equipment.

At the high end of the performance spectrum, entirely new use cases are opened up by 5G’s low latency and high-bandwidth connections. Augmented and virtual reality applications — previously dependent on Wi-Fi or wired internet connections — can go mobile, unlocking tremendous new value for IoT users. Consider the construction industry: Every crane you see on the skyline needs a trained operator in the cab, but it can be hard to find qualified candidates in boom times. The low-latency and high-bandwidth aspect of 5G has been used to create “connected cranes” with drivers precisely controlling huge loads from remote sites hundreds of kilometers away.

Connected vehicles provide another use case. Most autonomous vehicle technology today involves sensors feeding data from the environment into the vehicle. This arrangement helps the vehicle navigate its environment, but it’s a one-way street (pun intended) because the car can’t communicate with or influence that environment. But high-speed 5G communication can allow just that, making possible cars that not only drive themselves, but do so in collaboration with their environments — communicating in real time with streetlights, parking spaces, even other cars! — fundamentally reshaping how we move around.

Start thinking of use cases today

More devices popping up in existing industries, entirely new devices appearing for the first time — this is how new cellular technologies can help drive massive IoT and make real the jaw-dropping predictions for the future. Here’s one: Some analysts predict that 5G-enabled IoT will funnel 1,000 times more data to mobile networks than before.

That means the time to start thinking about what massive IoT could mean for you is now. What does a massive IoT world — a world with IoT devices in nearly every conceivable location — look like? What might it mean for your business? Will it help you increase operational efficiency? Better connect with customers? Or offer fundamentally new services?

Thinking through these questions today is the key to being ready for massive IoT tomorrow.


May 14, 2019  11:24 AM

Voice assistants, AI and the cloud: An illusion of performance and privacy

Joseph Dureau Profile: Joseph Dureau
ai, AI privacy, Consumer IoT, enterprise ai, Enterprise IoT, Internet of Things, iot, iot privacy, iot security, neuromorphic chips, voice assistants

With AI, the choice between cloud performance and respect for consumer privacy is a false one. Not only can AI perform well locally, but through embedded technologies, it can also personalize understanding and thereby refine it.

The voice-activated personal assistants developed by U.S. tech giants currently flooding the market work mainly in the cloud. The big players want to shape the world according to this 100% cloud model, but it deprives users of the ability to control their own data. The only truly private solution is embedded technology, which does not require user interactions to be accessed by any third-party server.

The superiority of the cloud: A myth

A misconception, knowingly maintained by the Big Four tech companies, is that technologies hosted in the cloud perform better than those embedded locally. Although the cloud offers virtually unlimited computing capacity, that extra capacity does not categorically translate into significant performance gains. For the everyday use of voice technologies, for example, enormous computing power is simply unnecessary.

The most advanced AI technologies, especially machine learning technologies, often have no need of the cloud. Embedded machine learning has become commonplace. Major smartphones and computers on the market use it for common tasks, like face identification, and in a range of other applications.

The arrival of neuromorphic chips: An impending technological leap

While it is often possible to achieve the same performance locally as in the cloud, the arrival of neuromorphic chips further eliminates any perceived advantage of relying on the cloud. These new chips are designed specifically for implementing neural networks, the brain-inspired models that have revolutionized artificial intelligence. Simply put, this new generation of chips makes it possible to embed fairly complicated AI without the need for the cloud’s compute power. The imminent arrival of these chips on the market will mean a technological leap in everyday terms. High-end phones are already equipped with neuromorphic chips, and the coming year will see their foray into everyday objects, including speakers, televisions and home appliances.

Continuing improvement in AI on the cloud: Another myth

A fantasy for major voice-activated assistants, such as Google Assistant or Amazon Alexa, is the ability to boil all their users’ data in the same cauldron. According to this view, passing data through the cloud should improve the AI continuously, perfecting it without limit. This mindset leads users to share their data without necessarily understanding the reasons or the implications.

For example, when granting Alexa this kind of access, few users imagined that recordings made in their homes might be shared with thousands of Amazon employees or subcontractors in the United States, India, Costa Rica, Canada or Romania to be manually categorized with the goal of enhancing the voice assistant’s performance. The need for such collection and manual labeling efforts is disputed, as competing technologies reach the same levels of embedded performance, without the need of user data for training.

Embedded technology: Toward customizable AIs

Besides the total lack of respect for user privacy, this mixing of data in the cloud has the further disadvantage of producing a very generic intelligence, lacking any sort of precision or specificity. In a generic speech comprehension model used by everyone regardless of the query, words are weighted according to their general probability of occurrence. For example, “Barack Obama” will carry more weight than a less popular name. So, when you say something phonetically similar, like “Barbara,” and the voice assistant does not hear you clearly, it is likely to assume you meant the more popular phrase. This generic approach is therefore limited in that it does not take into account the context of the query itself.

By comparison, embedded speech recognition is inherently contextualized: if you are talking to a smart speaker, it will know that you are referencing the musical domain and will not search for “Barack Obama” as an artist when queried with “Barbara.” Thanks to embedded machine learning, this tool will even be able to learn users’ personal tastes, creating a customizable AI tailored to a person’s specific needs.
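The contrast between a generic prior and a domain-aware one can be sketched in a few lines. The candidate words, probabilities and domain boosts below are entirely made up for illustration:

```python
def pick_candidate(candidates, generic_prior, domain_boost, domain=None):
    """Choose among acoustically similar hypotheses by combining a
    generic word prior with an optional domain-specific boost."""
    def score(word):
        s = generic_prior.get(word, 0.0)
        if domain is not None:
            s += domain_boost.get(domain, {}).get(word, 0.0)
        return s
    return max(candidates, key=score)

# Hypothetical priors: "Barack Obama" is globally more frequent...
generic_prior = {"Barack Obama": 0.9, "Barbara": 0.2}
# ...but in a music query, "Barbara" (the singer) should win.
domain_boost = {"music": {"Barbara": 1.0}}

hypotheses = ["Barack Obama", "Barbara"]
generic_pick = pick_candidate(hypotheses, generic_prior, domain_boost)
music_pick = pick_candidate(hypotheses, generic_prior, domain_boost,
                            domain="music")
```

The cloud model, serving everyone, effectively always scores with the generic prior; an embedded model running on a music speaker can afford to carry the domain boost and pick the contextually right name.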

As technology moves toward an “everything connected” approach, including mail, contacts, files, chat history and so forth, the risks of going through the cloud only become more concerning. Data breaches and privacy mishaps no longer deal with a specific application or tool, but now encompass a user’s entire life. As embedded machine learning continues to make strides, it is likely only a matter of time before users no longer see the reason to risk their privacy when using their AI voice assistants.

