IoT Agenda


October 15, 2019  12:09 PM

The future of work and enterprise IoT

Steve Wilson
connected devices, Consumer IoT, Enterprise IoT, Industrial IoT, Intelligent device, Internet of Things, iot

IoT is exploding. According to Gartner, there will be 20.4 billion IoT devices by 2020. No longer just a pipe dream, IoT is shaping up to be the backbone of our technological future. Not only will it drive some really cool improvements in the way we live and work, but it will also generate between $4 and $11 trillion in economic value by 2025, according to McKinsey.

Connected devices have simplified our personal lives in so many ways. Thanks to Siri and Alexa, we’ve been freed from mundane tasks that take time away from the things we really want to do. With Nest, we can rest easy if we forget to close the garage door in our haste to get to work. We want this same simple, intelligent experience at work — and it’s coming.

I recently spoke with Nivedita Ojha, Senior Director of Product Management for Enterprise IoT at Citrix, to see where things stand.

Steve Wilson: For a long time, IoT was kind of like the Jetsons: fun to watch and a cool concept, but no one was convinced it could ever happen. Where are we today?

Nivedita Ojha: IoT has really moved from concept to reality. It is creating new business models that are transforming entire industries and driving unprecedented operating efficiencies. Think smart shelves in retail stores and sensor-fueled tractors on farms.

Wilson: When most people think of IoT, they think of consumer applications like Siri and Alexa or industrial uses like sensors that track inventory and shipments. But the lines seem to be blurring. What are your thoughts?

Ojha: For a long time, IoT was black and white. It was either consumer or industrial. But there is a new category taking shape: Enterprise IoT. Because the industry has made things on the personal front so simple, we are now starting to see consumer devices make their way into the enterprise workspace where they can do the same thing. Bring your own device has really become “bring your own thing.” Alexa is now in the office giving commands to open a file or start a meeting. Work is becoming handsfree, intelligent and autonomous.

Wilson: And are employees happier as a result?

Ojha: Definitely. They have the freedom to use the devices they prefer to remove a lot of the complexity that bogs them down and keeps them from doing what they want and are paid to do. They are more engaged and productive as a result. At the end of the day, it’s the little everyday experiences that matter the most.

Wilson: What does all of this mean for IT? It seems to open up a whole new set of challenges in terms of securing these devices.

Ojha: Managing IoT devices definitely requires a different approach to security. Traditional models don’t adequately protect against the new vulnerabilities that connected devices open. To effectively secure them, IT needs to take a more intelligent and contextual approach and put in place a model that supports roaming, wirelessly connected, mobile users without getting in the way of their experience.

Wilson: And how do they pull this off?

Ojha: It requires an integrated set of tools that combines management, security, workspace and mobility into a centralized infrastructure, allowing IT to monitor and secure all types of endpoints, applications and software from a single pane of glass. Such tools do exist.

Wilson: Any final thoughts?

Ojha: Enterprise IoT is one of the most interesting developments on the digital transformation front that we have seen in a long time. While it is still in its nascent stage, it is fast driving the convergence of digital and physical workspaces and transforming the way work gets done in very positive ways.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

October 14, 2019  4:53 PM

IoT monetization framework: The IoT stack and how to monetize it

Cris Wendt
IIoT, Internet of Things, iot, IoT monetization, IoT software, IoT strategy

This article is the second in a six-part series about monetizing IoT.

The previous article addressed the key goals of efficient IoT monetization. The next step in the monetization process is understanding how to monetize your IoT offering in a structured framework. The two-part framework presented here will help you move forward. Developed with software and hardware manufacturers, it delivers monetization approaches based upon the value the customer receives from an IoT product.

What is being monetized?

The IoT Solution Stack recognizes that an IoT solution isn’t necessarily a single element, but multiple elements connected conceptually as a stack to provide value individually or collectively. The embedded software in these different elements is what provides new levels of performance, capacity and functionality, forming the foundation of new value streams. This approach recognizes that a supplier may provide and monetize products that comprise only part of the stack, or that the supplier may provide a solution that encompasses the entire stack. How much of a solution you provide will very much drive your monetization strategy.

Below is an illustration of the IoT Solution Stack:

Source: Flexera

Let’s look at the elements of this stack, starting from the bottom:

Device. These are the edge devices or the “T” in “IoT.” They are the high-volume software-enabled devices that are connected via the internet. Depending upon the solution, this includes myriad phones, sensors, meters, cameras, valves, switches, systems, vehicles, scanners, medical instruments and more that are networked to form an IoT solution.

These edge devices are becoming more intelligent and flexible with the increasing role of their embedded software. One example is a medical instrument that offers various software-enabled functions along with connectivity to electronic medical databases with patient and medication information.

Gateway. In the middle of the stack are intermediate control and aggregation points found mostly in Industrial Internet of Things (IIoT) deployments. A good example is the programmable logic controller (PLC) found in manufacturing environments. PLCs usually perform device management to control a subset of the edge devices on a factory floor, such as switches, valves and robots. Their monetization is often derived from the rich set of different functions they provide and the number of edge devices or data they manage.

Cloud Analytics & Control.  At the top of the stack are the data aggregation, analytics, control and decision-making functions. These solutions tend to be cloud-based software that rely on combinations of big data, AI, blockchain and scalable cloud-based infrastructure technologies to make sense of tremendous amounts of data.

This level of the stack provides solutions that tend to be the “brain” of IoT networks and is used for functions such as controlling factory floors, optimizing supply chain performance, detecting defects and errors in utility networks or controlling lighting throughout a city. Value is often driven by the type of functions that are provided, scaling by the number of edge devices and data controlled and managed by this level of the stack.

As a rule of thumb, the value of IoT follows the direction of the stack; the higher up you go in the stack, the higher value the offering provides. The highest level is most aligned with producing the desired outcomes of connecting a variety of devices. Of course, given the large number of end devices in a deployment, the total dollars in a solution might be driven by the total number of edge devices.

And the highest value is achieved when the entire stack is provided by a single supplier, such as when a utility meter provider sells the entire electric, gas or water stack as a service to a municipality.

How to monetize the stack

The three dimensions of monetization form the basis for determining the structure of your monetization models. The following are the underlying monetization structural elements, not buying programs or discount models used to tune the actual pricing.

Source: Flexera

Revenue model. Every monetization model relies on an underlying structure for producing revenue that is designed to support a revenue-generating business model that matches the way customers want to consume the product or service. Common options include:

  1. A one-time, up-front model based upon physical goods and perpetual licenses, where revenue comes from ongoing transactions to sell new or expanded products
  2. A recurring revenue model based upon time or a measure of consumption (e.g. number of hours expired, number of analyses performed), where revenue is generated based upon measurements of value and usage
  3. An outcome model based upon achieving a specific outcome or measurable value, such as improvement in crop yield per year

Monetization metrics. Metrics define the units of usage or value that are used for individual product pricing. This allows product managers to effectively price those units of value that are important to customers based upon how they derive value from the product. Metrics are the units of value that enable a company to generate more revenue through increased use or adoption, for example, more customers using management software; more endpoints being managed by an IoT cloud analytics solution; or the amount of data processed for an analysis.
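
To make the idea of monetization metrics concrete, here is a minimal sketch of a consumption-based charge calculation. All metric names, rates and figures are invented for illustration and are not drawn from Flexera or any real pricing program.

    # Hypothetical sketch: turning monetization metrics into a usage-based monthly charge.
    # Metric names and per-unit rates are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class MeteredUsage:
        endpoints_managed: int       # edge devices reporting to the analytics tier
        gb_data_processed: float     # data volume analyzed during the billing period
        analyses_performed: int      # number of analysis runs

    RATES = {
        "endpoints_managed": 0.50,   # currency units per endpoint per month
        "gb_data_processed": 0.10,   # per GB processed
        "analyses_performed": 2.00,  # per analysis run
    }

    def monthly_charge(usage: MeteredUsage) -> float:
        """Sum each metered metric multiplied by its per-unit rate."""
        return round(
            usage.endpoints_managed * RATES["endpoints_managed"]
            + usage.gb_data_processed * RATES["gb_data_processed"]
            + usage.analyses_performed * RATES["analyses_performed"],
            2,
        )

    print(monthly_charge(MeteredUsage(1200, 350.0, 40)))  # -> 715.0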

Product packaging. Product packaging discusses approaches to meaningfully partitioning your product functionality into different products or commercial units to grow revenue. This dimension of monetization considers different approaches for your products to meet varied market opportunities or to monetize the customer journey as customers become increasingly expert and want to increase the value they realize from a solution.

The third article in this series will cover monetization models — perpetual, subscription, usage, and outcome — and how to select the one that’s most suited to different offerings.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 14, 2019  4:30 PM

Who says cellular IoT can’t be used in farming and agriculture?

Svein-Egil Nielsen
cellular IoT, Internet of Things, iot, IoT application, IoT connectivity, IOT Network, smart farming

I’ve been reading and hearing a lot recently about cellular IoT not being suitable for smart farming and agricultural applications. The main reasons given are that it uses too much power, and does not have the coverage or range needed.

But that doesn’t reflect the world that I am seeing. For example, Finnish startup Anicare is already using narrowband IoT (NB-IoT), one of the two categories of cellular IoT (the other being LTE-M), to track the health and location of farmed reindeer — and other herding animals — that spend most of the year in the wild.

The Anicare Healtag tracker is attached to an animal’s earflap, and autonomously measures vital functions for up to five years. Data is sent via cellular network to the cloud for immediate access via a smartphone application. This means injured or sick animals can be automatically identified for rescue and treatment. This device is very small, and is testament to the low power capability of cellular IoT technology.

But how can this application be helpful when used in regions of Lapland and Scandinavia, where regular cell phone coverage can sometimes be a challenge?

NB-IoT is built for range

The good news is that NB-IoT is designed to offer enhanced coverage in hard-to-reach areas. This includes indoors and uninhabited rural areas. It offers 20+ dB (x7) better coverage compared to LTE-M. Maximum coverage is achieved by using a low 200kHz bandwidth, simpler signaling structures and up to 2,048 retransmissions.

Furthermore, NB-IoT has three deployment scenarios: standalone, guard band and in band. Paired with its narrow 200kHz bandwidth, NB-IoT can deploy even to occupied lower cellular bands. These lower frequency bands have excellent propagation characteristics and provide excellent performance in terms of coverage. As such, an NB-IoT signal has a real-world range of over 30km. Indeed, the longest-range a NB-IoT connection achieved in a commercial network is 100km.
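
To put those coverage figures in perspective, the following back-of-the-envelope sketch (my own simplification using the standard log-distance path-loss model; the exponents are assumptions, not values from the article) shows roughly how 20 dB of extra link budget translates into range:

    # Rough sketch: how extra link budget (in dB) stretches radio range.
    # Uses the log-distance path-loss model; the exponents below are assumptions.
    def range_multiplier(extra_link_budget_db: float, path_loss_exponent: float) -> float:
        """Every 10*n dB of additional budget multiplies range by 10 for exponent n."""
        return 10 ** (extra_link_budget_db / (10 * path_loss_exponent))

    for n in (2.0, 3.5):  # 2.0 ~ free space, 3.5 ~ an assumed obstructed/rural environment
        print(f"path-loss exponent {n}: {range_multiplier(20, n):.1f}x range for +20 dB")
    # exponent 2.0 -> 10.0x, exponent 3.5 -> ~3.7x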

Another application example already on the market is an NB-IoT emergency alarm currently available in Holland from Dutch startup, Montr. The Montr Emergency Button is designed to protect people in vulnerable situations, such as lone professionals at risk of physical attack or isolated accident, as well as seniors living at home. During internal tests, Montr found the NB-IoT signals could penetrate into locations such as deep basements commonly found under swimming pools, which have zero traditional cellphone signal.

Cellular is everywhere

Even though cellular IoT is ready for smart farming and agriculture applications today, the future looks even rosier. Numerous initiatives are now underway around the world to blanket cities in high-speed cellular coverage, and leaving rural areas with none is increasingly deemed unacceptable.

This is a push-and-pull scenario where there is market pull from consumers living in rural areas, and regulatory push from governments and telecom regulators to ensure it happens. For example, carrier U.S. Cellular recently announced plans to bring fairness to cellular network availability by actively targeting rural areas.

I predict, within a few years, hardly anywhere on this planet will not have access to cellular connectivity. I also predict that in certain smart agriculture and smart farming applications with particularly low-duty cycles, you will have cellular IoT-based solutions that consume such small amounts of power that they could have infinite battery lifetimes. This will be achieved through the use of energy harvesting solutions such as solar or inertial energy.

So the next time you read — or are told — that you can’t use cellular IoT for smart farming or agricultural applications, I’d question the source of that information.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 11, 2019  2:28 PM

Innovating field service IoT devices in the business 4.0 era

Gopinathan Krishnaswami
AI and IoT, field service, Internet of Things, iot, IoT analytics, IoT data management, IoT devices, Machine learning

Developing cutting edge commercial IoT technology means being at the forefront of the high-tech industry. As part of this movement, we are hearing much talk about what it means to build next-generation high-tech field service IoT technology, and what the seemingly limitless potential could bring to enterprises of all industries. Innovation is being driven by strategic implementation of AI, machine learning and deep learning, helping bring field service capabilities and end-user convenience to new heights.

Optimizing data collection

Field service technology is unique in that it serves as a connection between end-user devices and the entire enterprise supply chain, operating at both the near and far edges. When servicing end customers in the field, these IoT devices not only use data aggregated through end-device analytics, but they also generate ample amounts of data in their own right. Field service devices provide all enterprises with a gold mine of data that can be repurposed to help further optimize future and existing customer interactions, streamline supply chain operations and further develop field service IoT device capabilities to best fit customer and enterprise needs.

For example, consider cable providers: Field service technicians are naturally required for the installation and maintenance of cable boxes, routers and other home connectivity devices. These interactions between field technicians and customers can be optimized through AI and machine learning technologies, which crunch the data that is gathered through each customer interaction.

This data can help companies develop AI-driven field service devices that are able to understand and evaluate individual situations in real-time. By understanding both the profile of the end-user and the device history, field technology IoT devices will be reimagined through the ability to predict what tasks will be required from the technician, prior to arrival on-site. This will drastically reduce the amount of time needed for diagnostics during maintenance and repair, and will make for a better experience for the end-user as well as the technician.
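
As a purely hypothetical sketch of the kind of prediction described above (the feature names, task labels, training rows and model choice are all invented for illustration and do not describe any vendor’s system), a simple classifier could map device history to the likely on-site task:

    # Hypothetical sketch: predicting the likely field-service task from device history.
    # Features, labels and training rows are invented for the example.
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [device_age_months, resets_last_30_days, signal_dropouts_last_30_days]
    X = [
        [3, 0, 1],
        [24, 5, 0],
        [36, 1, 12],
        [12, 8, 2],
        [48, 2, 15],
    ]
    y = ["routine_check", "replace_router", "rewire_drop", "replace_router", "rewire_drop"]

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Predict the probable task before the technician arrives on-site.
    print(model.predict([[30, 6, 1]]))  # e.g. ['replace_router']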

Reducing energy waste through IoT innovation

Building management also has the potential to be impacted by high-tech innovation in the field service sector. If you take one specific industry field application, such as a data monitoring center, you’ll see that IoT is responsible for controlling some of the physical attributes of the center itself. Specifically, IoT controls the temperature, humidity and other things that make the day-to-day aspect of the center more comfortable for employees.

Building management device innovation can be beneficial to all companies. Many businesses have integrated IoT into their building management systems, significantly reducing power consumption. This integration allows companies to proactively shut down systems they didn’t even realize were running unused, reducing their energy waste. Thus, IoT and high tech in field technology keep processes and operations as optimal as possible.

Challenges with incorporating IoT and high-tech

While the benefits these technologies bring to companies can be exponential, there are also challenges and minor inhibitors that might make the technology seem less glamorous to some. Optimizing and maximizing field service IoT device capabilities is still a work in progress. Part of this is due to a reliance on legacy infrastructure and minimal high-tech investment.

As these organizations continue to undergo their digital transformational journeys, they will ultimately understand the importance of building new digital infrastructures. By building a high-tech infrastructure, businesses will be able to truly reinvent their products, services and internal operations, including field technology IoT integration.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 10, 2019  2:10 PM

How to protect data privacy in connected cars

Julian Weinberger
commercial IoT, Connected car, connected car data, Consumer data, Internet of Things, iot, IoT attacks, iot privacy, iot security

From monitoring our driving habits to tracking our location, connected cars know our every move. With a black box or event data recorder collecting information inside 96% of automobiles, modern cars are as much a computer as they are a means of transportation.

However, many drivers and passengers aren’t aware of the privacy risks that come along with connected cars. Personal data stored in cars is not always encrypted, or subject to legal restrictions.

While details such as seat-belt use, speed and braking are proving to be useful for insurance companies and law enforcement, personal information including phone contacts and text messages could also be collected without a person’s consent.

Connected cars on trial

In most countries, police only require probable cause to search vehicles and are not obliged to obtain a warrant before downloading data. The legality of this has already been tested in the courts.

Such cases call on individual courts to decide whether laws dating from before the digital age should be extended to let police gather more information than was originally intended, according to the American Civil Liberties Union. In the absence of universal legal protections, the problem will continue. Every new technology and connected device will bring up the same challenges.

Manufacturers promise security

Under regulations like EU GDPR, customers have a right to expect information will remain private unless they expressly give their consent. This puts the responsibility on the vehicle manufacturers to build appropriate security measures that will protect an individual’s personal data.

Advanced driver assistance systems (ADAS) such as collision avoiding automatic brake systems provide higher margins for manufacturers. The market for ADAS is expected to grow by more than 10% every year and reach $67 billion by 2025. Manufacturers have every incentive to ensure that these systems comply with data protection regulations and keep sensitive customer data secure.

So far, about 20 carmakers have signed up to build systems featuring built-in security. The plan is to give car owners the ability to manage the data collected in their vehicles, and obtain customer consent to use location and biometric data for marketing.

Encryption drives data protection

To better protect customer data, auto manufacturers will need to introduce encryption technology into their vehicles. VPN software can effectively encrypt data within the vehicle and as it passes over the Internet. By creating an encrypted tunnel for data communications with the auto manufacturer or smart city system, a VPN renders personal data indecipherable and protected from cybercriminals.
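
As a simplified illustration of the underlying principle (symmetric encryption of a single telemetry message rather than a full VPN tunnel; the key handling and field names are invented), data can be rendered unreadable before it ever leaves the vehicle:

    # Simplified sketch: encrypting vehicle telemetry before transmission.
    # Shows encryption in transit conceptually; a real VPN negotiates keys and tunnels all traffic.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, provisioned securely to vehicle and backend
    cipher = Fernet(key)

    telemetry = {"speed_kmh": 62, "braking": False, "seatbelt": True, "lat": 48.137, "lon": 11.575}
    ciphertext = cipher.encrypt(json.dumps(telemetry).encode())  # what actually crosses the network

    # Only a party holding the key (e.g. the manufacturer's backend) can read it.
    print(json.loads(cipher.decrypt(ciphertext)))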

Overall, as connected car technology advances and next-generation bandwidth increases, cases in which personal data in vehicles is analyzed without the owner’s consent will continue to occur. To prevent this, manufacturers must take the proper measures to meet data protection laws. Implementing VPN software can ensure that personal information stored in event data recorders and central computer systems is secure and safeguarded from unauthorized parties.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 9, 2019  2:35 PM

Securing your supply chain is as important as securing your devices

Wayne Dorris
cybersecurity, Internet of Things, iot, IoT cybersecurity, iot security

The Ponemon Institute published its annual report on third-party IoT risks in May, 2019. The report focused on identifying what companies do not know — or understand — about the risks inherent to IoT devices and applications, particularly when used by third parties. The report drives home the fact that, although integrators and end users understand that it is important to protect connected devices, many do not understand the full landscape of threats that these devices face.

Concerningly, the Ponemon report indicates that the percentage of organizations that have suffered a data breach, specifically because of unsecured IoT devices or applications, has risen from 15% to 26% in the past three years alone; a number that Ponemon points out is likely low due to the unfortunate reality that it is often difficult to recognize when a breach has occurred.

Perhaps more importantly, the report also notes that the percentage of companies experiencing data breaches caused by unsecured IoT devices or applications belonging to third parties has risen from 14% to 18% since 2018, and cyberattacks caused by those devices have risen from 18% to 23%. These numbers underscore the fact that even the most secure network can be compromised by failing to properly vet third-party suppliers, and that securing your supply chain is often as critical as securing your devices.

Built-in security is key, but it isn’t enough

Understandably, when it comes to keeping IoT devices secure, responsibility often falls on the manufacturer. Smart — and conscientious — manufacturers have a vested interest in ensuring that their devices have strong and effective security capabilities. This is truer than ever now that national and international regulations such as GDPR have taken effect, and even been supplemented by additional regulations incorporating language that mandates information security by design and default.

Both individuals and organizations are increasingly asserting their right to expect a basic level of security when they purchase and use a device. Other groups, like the National Institute of Standards and Technology, have issued their own recommendations for creating a core baseline for securing connected devices.

The concept of by design and default means that manufacturers must make good on many individuals’ expectations of built-in security, but it’s important to understand what security really means. Including built-in security measures doesn’t make a device impenetrable, nor does it ensure that users or integrators will understand how to best make use of those measures.

Measures such as creating unique default passwords for each new device can prevent certain malware infections like the Mirai Botnet and its many offshoots from taking control of massive numbers of devices at once. But it can’t prevent an integrator from creating an inadvertent backdoor into the system, nor can it stop an employee from leaving their new credentials lying around where they can be easily viewed or stolen.

Research has shown that 90% of data breaches can be tied to largely avoidable problems, including human error, poor configurations and poor maintenance practices, according to Axis Communications. While end users naturally want to know what manufacturers are doing to build security into a product on the software side, the truth is that built-in security can only do so much.

Ensuring that the knowledge is there to prevent configuration and maintenance errors is equally important. This further underscores why securing the other levels of the supply chain is a critical aspect of IoT security.

Education is key for manufacturers, integrators and end users

When bringing IoT products into a network, the integrator must work closely with the information security team to ensure clear communication of the control set for the network, the framework upon which it is built and more. The communication must happen before the installation even begins, so that the integrator understands how to approach the project in the best possible way. This means end users must understand the products they are purchasing and how they fit into the wider network.

Although manufacturers understand the benefits of built-in security controls, if an integrator or contractor doesn’t line up a network’s controls with its access users, then it doesn’t matter how securely built the products are. Even if the manufacturer has done threat modeling by having a secure software group come in to determine potential attack methods on a given product, information must be effectively conveyed to the integrator so that they can install the product using the most effective security framework. Even a minor communication disconnect between a manufacturer and an integrator can cause major problems in this manner.

Manufacturers can help improve overall security by taking steps to ensure that encrypted connections are established from the start. Many manufacturers are turning to features like Secure Boot, which only allows a device to boot using software that is trusted by the original equipment manufacturer, preventing hackers or other cybercriminals from installing unknown programs onto a device. These measures are not enough on their own, but taken in conjunction with improving integrators’ and end users’ knowledge and understanding of product security, they represent an important piece of the puzzle.
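
As a conceptual sketch of the idea behind Secure Boot (greatly simplified; real implementations anchor verification in hardware and chain it across boot stages, and the key handling here is invented for illustration), firmware is accepted only if its signature verifies against a key trusted by the manufacturer:

    # Conceptual sketch of signature-verified boot: run only firmware signed by a trusted key.
    # Real Secure Boot chains verification through hardware; this shows just the check itself.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # At the factory (illustrative): the OEM signs the firmware image.
    oem_key = Ed25519PrivateKey.generate()
    firmware = b"\x7fELF...device firmware image..."
    signature = oem_key.sign(firmware)

    # On the device: a baked-in public key verifies the image before it is allowed to boot.
    trusted_public_key = oem_key.public_key()
    try:
        trusted_public_key.verify(signature, firmware)
        print("Signature valid: boot firmware")
    except InvalidSignature:
        print("Unknown or tampered firmware: refuse to boot")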

Establishing trust is critical, but that trust must be earned

Vendor and manufacturer vetting are part of the new trust dimension. Whether a manufacturer’s products can provide a solution and meet the technical requirements to solve a given problem is no longer the only concern for end users.

Today’s businesses want to know the cybersecurity approach taken by the manufacturer before they purchase a given product. Today’s manufacturers have an obligation to educate their integrators to ensure that those products are being deployed in the most effective way. Conveying that to end users is important, and establishing trust in the marketplace is a multi-step process.

Transparency is key. In any industry, organizations that are open and honest will naturally foster greater trust. Similarly, demonstrating good processes and practices when it comes to securing customer data can also go a long way, particularly as data breaches make the headlines with increasing frequency.

Even though most applications on IoT devices don’t contain much Personally Identifiable Information (PII) of the type covered by GDPR, compromised devices can be used as a jumping off point for privilege escalation and cross-breach to the IT network. This is where PII and other data can be easily obtained. End users want to know they can trust a product to secure data effectively, and manufacturers with a reputation for using features like Secure Boot can help provide that confidence.

It is important to strengthen the ties between manufacturers, integrators and end users to create new avenues for communication, and to ensure that a proper level of knowledge of both the products in use and the needs of the customer is established for all parties involved.

Education is critical, and everyone must be willing to take the steps needed to build a more secure and knowledgeable future. For many, this means supply chain mapping to detail every touch point of all material, processes and shipments. As IoT devices continue to become more commonplace across a wide range of industries, securing the supply chain will only become more important.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 8, 2019  5:16 PM

3 E’s of AI: Creating explainable AI

Scott Zoldi
Blockchain, Internet of Things, iot, IoT and blockchain, IoT strategy

This is the second piece in a three-part series. Read the first piece here.

I have a belief that’s unorthodox in the data science world: explainability first, predictive power second, a notion that is more important than ever for companies implementing AI.

Why? Because AI is the hottest technology on the planet, and nearly every organization has a mandate to explore its benefits by applying AI models developed internally or acquired as part of a package solution from a third-party provider. Yet a recent European study by London venture capital firm MMC found that 40% of startups classified as AI companies don’t actually use AI in a way that is material to their businesses. How can these startups and the customers that buy from them rely on the predictive power of their AI technology when it’s not clear if the models delivering it are truly AI?

Explainability is everything

AI that is explainable should make it easy for humans to find the answers to important questions including:

  • Was the model built properly?
  • What are the risks of using the model?
  • When does the model degrade?

The latter question illustrates the related concept of humble AI, in which data scientists determine the suitability of a model’s performance in different situations, or situations in which it won’t work because of a low density of neural networks and a lack of historical data.

We need to understand AI models better because when we use the scores a model produces, we assume that the score is equally valid for all customers and all scoring scenarios. Often this may not be the case, which can easily lead to all manner of important decisions being made based on very imperfect information.

Balancing speed with explainability

Many companies rush to operationalize AI models that are neither understood nor auditable in the race to build predictive models as quickly as possible with open source tools that many users don’t fully understand. In my data science organization, we use two techniques — blockchain and explainable latent features — that dramatically improve the explainability of the AI models we build.

In 2018 I produced a patent application (16/128,359 USA) around using blockchain to ensure that all of the decisions made about a machine learning model, a fundamental component of many AI solutions, are recorded and auditable. My patent describes how to codify analytic and machine learning model development using blockchain technology to associate a chain of entities, work tasks and requirements with a model, including testing and validation checks.

The blockchain substantiates a trail of decision-making. It shows if a variable is acceptable, if it introduces bias into the model and if the variable is used properly. We can see at a very granular level the pieces of the model, the way the model functions and the way it responds to expected data, rejects bad data or responds to a simulated changing environment.

This use of blockchain to orchestrate the agile model development process can be used by parties outside the development organization, such as a governance team or regulatory units. In this way, analytic model development becomes highly explainable and decisions auditable, a critical factor in delivering AI technology that is both explainable and ethical.
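
A minimal sketch of the underlying idea, a hash-chained, append-only log of model-development decisions, appears below. This is a generic illustration of tamper-evident record keeping, not the patented design or a production blockchain.

    # Minimal sketch: an append-only, hash-chained log of model-development decisions.
    # Generic illustration of tamper-evident record keeping, not a full blockchain.
    import hashlib, json, time

    def append_entry(chain: list, event: dict) -> None:
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        record = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)

    def verify(chain: list) -> bool:
        """Recompute every hash; editing any earlier entry breaks the chain."""
        for i, record in enumerate(chain):
            body = {k: v for k, v in record.items() if k != "hash"}
            expected_prev = chain[i - 1]["hash"] if i else "0" * 64
            if record["prev_hash"] != expected_prev:
                return False
            if record["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
        return True

    log = []
    append_entry(log, {"task": "variable_review", "variable": "v2", "decision": "approved"})
    append_entry(log, {"task": "validation_check", "test": "out_of_time_sample", "result": "passed"})
    print(verify(log))  # True; tampering with an earlier entry would make this False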

A neural network model has a complex structure; even the simplest neural net with a single hidden layer produces latent features that are hard to understand, as shown in Figure 1. An explainable multi-layered neural network, by contrast, can be easily understood by an analyst, a business manager and a regulator.

I have developed a methodology that exposes the key driving features of the specification of each hidden node. This leads to an explainable neural network: forcing hidden nodes to have only sparse connections makes the behavior of the neural network easily understandable.

Figure 1: Even the simplest neural network can have a single hidden layer, making it hard to understand. Source: Scott Zoldi, FICO.

Generating this model leads to the learning of a set of interpretable latent features. These are non-linear transformations of a single input variable, or interactions of two or more input variables. The interpretability threshold is the upper limit on the number of inputs allowed in a single hidden node, as illustrated in Figure 2.

As a consequence, the hidden nodes get simplified. In this example, hidden node LF1 is a non-linear transformation of input variable v2, and LF2 is an interaction of two input variables, v1 and v5. These nodes are considered resolved because the number of inputs is below or equal to the interpretability threshold of two in this example. On the other hand, the node LF3 is considered unresolved.

Figure 2: Nodes LF1 and LF2 are resolved.  Node LF3 is unresolved because the number of its inputs exceeds the interpretability threshold. Source: Scott Zoldi, FICO.

To resolve an unresolved node, thus making it explainable, we tap into its activation. A new neural network model is then trained. The input variables of that hidden node become the predictors for the new neural network, and the hidden node activation is the target. This process expresses the unresolved node in terms of another layer of latent features, some of which are resolved. Applying this approach iteratively to all the unresolved nodes leads to a sparsely connected deep neural network, with an unusual architecture, in which each node is resolved and therefore is interpretable, as shown in Figure 3.

Figure 3: Achieve interpretability by iteratively applying a resolution technique. Source: Scott Zoldi, FICO.
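
A small sketch of the bookkeeping this process implies is shown below. The weights are invented to mirror the LF1/LF2/LF3 example above, and the threshold of two is taken from that example; a node counts as resolved when its number of nonzero inputs does not exceed the interpretability threshold.

    # Sketch: flag hidden nodes as resolved or unresolved against an interpretability threshold.
    # Weights are invented to mirror the LF1/LF2/LF3 example; rows are hidden nodes, columns are inputs v1..v5.
    import numpy as np

    hidden_weights = np.array([
        [0.0, 1.3, 0.0, 0.0, 0.0],   # LF1: driven by v2 only        -> resolved
        [0.7, 0.0, 0.0, 0.0, -0.4],  # LF2: interaction of v1 and v5 -> resolved
        [0.5, 0.0, 0.9, 1.1, 0.2],   # LF3: four inputs              -> unresolved
    ])
    INTERPRETABILITY_THRESHOLD = 2

    def resolution_status(weights: np.ndarray, threshold: int) -> list:
        counts = (np.abs(weights) > 1e-9).sum(axis=1)  # nonzero inputs per hidden node
        return ["resolved" if c <= threshold else "unresolved" for c in counts]

    print(resolution_status(hidden_weights, INTERPRETABILITY_THRESHOLD))
    # ['resolved', 'resolved', 'unresolved'] -- unresolved nodes get their own sub-network next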

The bottom line

Together, explainable latent features and blockchain make complex AI models understandable to human analysts at companies and regulatory agencies, a crucial step in speeding ethical, highly predictive AI technology into production.

Keep an eye out for the third and final blog in my AI explainer series on the three Es of AI on the topic of efficient AI.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 8, 2019  3:40 PM

Emerging IoT attack surfaces present attackers with tempting new targets

Carolyn Crandall
Cyber, Cyber security, Cyberattacks, IIoT security, Internet of Things, iot, IoT cybersecurity, iot security

This is the first in a two-part series.

The meteoric rise of IoT has created a popular new attack surface for cybercriminals. The “2019 SonicWall Cyber Threat Report” indicated that there were 32.7 million cyberattacks targeting IoT devices in 2018, representing a 217.5% increase over attacks in 2017. What is interesting, however, is that in the Attivo Networks 2018 Threat Detection Survey findings, securing IoT ranked just sixth on respondents’ list of attack surface concerns. This is possibly because IoT, medical IoT and other interconnected devices commonly fall outside of a security team’s responsibilities.

This circumstance is cause for concern. Adding to the gravity of the situation is the fact that these devices often have minimal security requirements, are governed by laws preventing third-party security adjustments and lack regulation, which has been slow in setting standards. With Gartner predicting that 25 billion connected devices will be in use by 2021, security teams will need to proactively search for new ways to both secure the ever-growing number of potential entry points and quickly identify attacks that use these devices for easier access to the network.

Emerging attack surfaces present new and often different challenges for defenders to overcome. Organizations must understand how security teams can better secure each attack surface.

Cloud environments

Flexera’s “RightScale 2019 State of the Cloud Survey” indicated that 94% of enterprises utilize the cloud, with 84% of enterprises employing a multi-cloud strategy. The rapid growth of cloud and multi-cloud environments has presented organizations with shared security models to go along with both familiar and unique security challenges, so it is not surprising that 62% of Attivo Threat Detection Survey respondents listed securing the cloud as the top attack surface of concern. What compounds this particular situation is the fact that many of today’s security tools rely on virtual network interfaces or traditional connections to servers, databases and other infrastructure elements, which are no longer available in serverless computing environments.

Unsecured APIs, which allow cloud-based applications to connect, represent a significant new threat vector for cloud users, and shadow APIs add further potential dangers. Similarly, the rise of DevOps comes with a new set of vulnerabilities, including the proliferation of privileged accounts. Access to applications, embedded code and credential management all require a different assessment of risk and how to secure them. Given the fluidity of the environment, detecting credential theft and lateral movement quickly gains importance. Deception technology represents an increasingly popular approach here, helping with the detection of policy violations and unauthorized access, as well as identity access management inclusive of credentials, exposed attack paths and attempted access to functions or databases.

Data centers

Increased focus on edge computing has driven increased traffic to data centers and presents new security challenges as data processing and functionality move closer to end users. With more and more information stored in these data centers and the growing popularity of smaller, distributed data centers, there is an increased need to reassess security frameworks and their fit for these new architectures. The arrival of 5G is also likely to fuel the growth of edge computing. In this environment, security, privacy and storage management will need additional attention. Scalability of security systems will be critical as data centers increase in size and given the distributed nature of these networks.

Whether on-premises or in a remote data center, threats may come in both internal and external forms and may be intentional or unintentional. Incomplete or inadequate screening of new employees, lack of consistent internal protocols and limited access control can create an unsafe environment for data and operations. Lack of backups and disaster recovery services can also render information vulnerable to disasters — natural or otherwise. Securing the complete digital ecosystem is about more than just fitting today’s needs. The way those needs are rapidly evolving will require assessing security frameworks and their fit in the various environments, as well as an overlay view of how an attacker would attack them. Once organizations have identified their risks, they can move forward with establishing a full security fabric composed of prevention and early detection tools, as well as programs for faster and automated incident response.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 7, 2019  3:31 PM

The 3 Ps of location accuracy: Presence, proximity and positioning

Fabio Belloni
connected devices, Internet of Things, iot, IoT applications, IoT use cases, IoT wireless, location data, real-time location services, real-time location system, RTLS

Location has become a critical component across a wide variety of organizations as part of their ever-expanding IoT implementations. As companies get more sophisticated in their knowledge of what IoT can do for their business, they’re moving beyond basic applications to use IoT to better manage key business processes.

Knowing the precise location of a person — or object — both by itself and in relation to other people and objects helps organizations better understand business processes, from inventory control in a warehouse environment to generating advanced player statistics in a sports environment.

While accurate location capabilities are on every company’s wish list, each has a different idea of what location accuracy means. Is location within a meter accurate enough? For some use cases and applications, it is perfectly sufficient. However, other use cases may demand much more precise levels of accuracy, perhaps even as accurate as 10 centimeters.

There are a few distinct levels of accuracy, and several technologies and methodologies by which to achieve each level. The gating factor is the use case for which location capabilities are required. Understanding what the short-term and longer-term needs are for location — and taking into consideration total cost of ownership (TCO) of the solution — will be critical to making the correct choice.

While location capabilities are important for many types of organizations, looking at the need for location services in a single environment, such as a warehouse, makes it easy to understand both the benefits and the use cases the different levels of location can enable. There are three general levels.

Presence

Detecting the presence of a person or object is the simplest location-based solution, designed to answer the question: Is it present or not? Unfortunately, it’s also the least accurate location level. Presence detection is commonly used in a warehouse to determine whether an item or a pallet has arrived. In an advanced location system that uses Bluetooth technology and the angle of arrival methodology, a locator is placed at the entrance of the warehouse to act as a gate through which each tagged item passes. The locator identifies the tag based on its unique ID, measures the angle of the tag’s signal and calculates the direction of motion for the tag, thereby determining whether it’s entering or exiting the warehouse. Some real-time location systems (RTLS) can determine presence and provide real-time information within the warehouse because of their long communications range. However, many are limited by unsophisticated solutions that work well outside or inside, but not both.
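
As a hypothetical sketch of the gate logic described above (the angle convention, threshold and readings are invented for illustration), a sequence of angle-of-arrival readings for a tag can be reduced to an entering-or-exiting decision:

    # Hypothetical sketch: infer entry vs. exit from a tag's angle-of-arrival readings at a gate.
    # Assumed convention: angles sweep from negative (outside the door) to positive (inside).
    def direction_of_motion(angle_readings_deg: list) -> str:
        """Classify movement from the trend of successive AoA measurements."""
        if len(angle_readings_deg) < 2:
            return "unknown"
        trend = angle_readings_deg[-1] - angle_readings_deg[0]
        if trend > 5:    # swept toward inside-facing angles
            return "entering"
        if trend < -5:   # swept toward outside-facing angles
            return "exiting"
        return "stationary"

    # Example: a tagged pallet swept from -30 to +25 degrees while passing the locator.
    print(direction_of_motion([-30, -12, 4, 18, 25]))  # -> "entering"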

RFID technology can be used for presence detection by tracking objects transiting through the gates. However, scaling beyond this use case may be limited by the cost of architecting a solution using multiple gateway readers. Its inability to track items that are already inside the warehouse once the system is installed can also negatively impact both its cost-effectiveness and its usefulness as a location solution.

Proximity

Proximity-based solutions are designed to identify both the presence and location of items. Proximity detection typically uses a combination of high-accuracy positioning in areas where the use case demands it, and low-accuracy presence detection in areas where precision is less of a requirement. These solutions are ideal when uniformly accurate coverage is not needed.

This is often the case in the warehouse example, where approximate location information may be adequate for many use cases. With an RTLS that utilizes Bluetooth technology and the angle of arrival methodology, locators can be placed strategically within the warehouse, creating zones that track items in real time as they enter or leave each zone. In some deployments, locators can be positioned at strategic choke points, providing basic movement tracking. Optimizing the deployment and density of locators to enable higher location accuracy only where it’s needed provides a strong TCO without limiting potential use cases.

Other technologies that are commonly used for proximity-level location accuracy include Wi-Fi and Bluetooth Received Signal Strength Indication-based beaconing. However, they are each too inflexible to allow for deployments with non-uniform locator density. This makes it difficult to manage costs and deliver the right level of accuracy for each use case that is supported.

Positioning

Solutions based on Positioning, which is the highest level of location accuracy, are designed to reliably locate the exact position of a tracked item in real time, both inside a warehouse and nearby, such as in a storage yard. Positioning unlocks the full potential of location-based solutions because of its flexibility and very precise levels of accuracy.

In warehouses, companies often need to know the precise and real-time location of items, both stationary and in motion. Common warehousing applications such as worker safety, collision avoidance, inventory management and advanced workflow optimization all require knowing the precise location of people and objects. With an RTLS that utilizes Bluetooth technology and the angle of arrival methodology, businesses can uniformly cover the area of interest with locators, so that the system can reliably calculate the accurate location of tags in real time.
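
A simplified sketch of how two angle-of-arrival bearings can be combined into a position is shown below. It is 2D geometry only, with invented locator coordinates; production RTLS engines fuse many locators with calibration and filtering.

    # Simplified sketch: 2D position from two locators' angle-of-arrival bearings.
    # Locator positions and angles are invented; real systems fuse many locators with filtering.
    import numpy as np

    def intersect_bearings(p1, theta1_deg, p2, theta2_deg):
        """Intersect two rays, each defined by a locator position and a bearing angle."""
        d1 = np.array([np.cos(np.radians(theta1_deg)), np.sin(np.radians(theta1_deg))])
        d2 = np.array([np.cos(np.radians(theta2_deg)), np.sin(np.radians(theta2_deg))])
        # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2), then plug t1 back in.
        A = np.column_stack((d1, -d2))
        t = np.linalg.solve(A, np.array(p2, dtype=float) - np.array(p1, dtype=float))
        return np.array(p1, dtype=float) + t[0] * d1

    # Two locators on opposite warehouse walls, each reporting the bearing to the same tag.
    tag_xy = intersect_bearings(p1=(0.0, 0.0), theta1_deg=45.0, p2=(10.0, 0.0), theta2_deg=135.0)
    print(tag_xy)  # -> approximately [5. 5.]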

Having this level of accuracy, and the flexibility to also support presence and proximity, satisfies nearly all of today’s use cases as well as many potential new ones, some of which may not have been thought of yet because of the limitations of location technology and cost considerations.

An additional technology being used for use cases that demand high levels of location accuracy is ultra-wideband (UWB). However, UWB is only useful for high-accuracy use cases, because it cannot be scaled down, technologically or cost-wise, to cover presence and proximity requirements, limiting its effectiveness as a solution that covers all use cases. The high cost of tags and the limits to radio certifications across different geographical areas also limit UWB’s usability as a positioning solution.

The warehouse example showcases the reach, flexibility and accuracy of the different types of solutions available for determining location. Organizations of all types and sizes, from manufacturing and supply chain and logistics to healthcare and retail, have already established a wide variety of use cases where knowing the location of an object — or person — provides tremendous business value. The specific level of accuracy they need depends on the use case — or use cases — they need to support across the business; today and in the future. An RTLS that covers all levels of accuracy, multiple use cases, and is scalable for future growth provides the most attractive TCO and best return on investment.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 4, 2019  5:58 PM

Products as a service is a top priority for manufacturers

Sean Riley
IIoT, IIoT design, IIoT strategy, Industrial IoT, IoT strategy, smart manufacturing

North American manufacturers that want to pursue multiple industrial internet of things (IIoT) initiatives and scale the results throughout their organizations often face significant challenges, according to a new independent survey of more than 125 manufacturers primarily in heavy industry and automotive sectors.

The survey from Software AG also found that manufacturers have objectives that span both products and production, but are unable to reach their predicted value because of known and unknown complexities.

IIoT offers new revenue if organizations can overcome obstacles

Organizations clearly prioritize new revenue generation for IIoT projects, with 84% of the automotive and heavy industry manufacturers surveyed selecting the monetization of products as a service as the most important area of IIoT. Optimizing production is also viewed as a top priority for 58% of heavy industry and 50% of automotive manufacturers. Historically, the primary use for IIoT has been predictive maintenance, but in this survey, respondents did not view it as important as monetization or operational optimization.

This is because the vast majority of manufacturers report that their IIoT investments are not adding data or value to other parts of the organization. Despite the fact that 80% of all respondents identified that processes around IIoT platforms need to be optimized to stay competitive, very few are doing this. These organizations face obstacles in obtaining and sharing IIoT data that make optimization difficult. Fifty-six percent of automotive manufacturers consider IT/OT integration as the most challenging task that has prevented them from fully realizing the ROI from IIoT investments.

Analytics are equally difficult to use

More than 60% of the manufacturers surveyed had as much difficulty with defining threshold-based rules as with using predictive analytics. This means that simple if-then statements, which any associate can create, are giving organizations as much trouble as predictive analytics, which rely on complex algorithms that require the expertise of a data scientist. While neither task is considered simple, leveraging predictive analytics was rated as only slightly more difficult than threshold-based rules.

A comparison of how difficult organizations consider defining threshold-based rules versus using predictive analytics. Source: Software AG
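
To make the comparison concrete, here is a minimal sketch of what a threshold-based, if-then rule on machine telemetry looks like (the field names and limits are invented, not taken from the survey); the survey’s point is that even rules this simple prove hard to define and maintain at scale:

    # Minimal sketch of a threshold-based (if-then) rule on machine telemetry.
    # Field names and limits are illustrative, not from the survey.
    def check_thresholds(reading: dict) -> list:
        alerts = []
        if reading["spindle_temp_c"] > 85:
            alerts.append("Spindle temperature above 85 C: schedule an inspection")
        if reading["vibration_mm_s"] > 7.0:
            alerts.append("Vibration above alarm limit: reduce load")
        return alerts

    print(check_thresholds({"spindle_temp_c": 91, "vibration_mm_s": 3.2}))
    # -> ['Spindle temperature above 85 C: schedule an inspection']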

Manufacturers place a high value on IIoT, but cannot spread their existing IIoT innovations and investments across their organizations. They can solve this dilemma by investing in the right IT/OT integration strategy and IoT technology. Manufacturers can use four best practices to scale their IIoT investments across their enterprises.

  1. Ensure clear collaboration between IT and the business by following a transparent, step-by-step approach that starts focused and has clear near-term and long-term objectives to scale.
  2. Give IT the ability to connect at speed with a digital production platform that is proven to be successful.
  3. Leverage a GUI-driven, consistent platform that supports all potential use cases and an ecosystem of IT associates, business users and partners.
  4. Enable the plant or field service workers to work autonomously without continual support from IT through GUI-driven analytics, centralized management and easy, batch device connectivity and management.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

