IoT Agenda


January 15, 2020  2:15 PM

The rise of thing commerce and what it means for software development

Antony Edwards
connected devices, Ecommerce, Internet of Things, iot, IoT and AI, IoT devices, IoT software, IoT strategy, Thing Commerce

As connected devices continue their explosive growth, thing commerce is moving from niche to mainstream. So, what exactly is thing commerce? It is commerce in which connected machines, such as smart home appliances and industrial equipment, make buying decisions on people’s behalf, either by taking direction from customers or by following a set of rules, context and individual preferences. Thing commerce will ultimately include buying goods, reporting problems, requesting services and negotiating deals.

The rise of thing commerce will see more companies and consumers interacting with virtual assistants in smart appliances that can make purchases on their behalf. The days of remembering to buy milk and of checking that fresh produce is not past its sell-by date will be over. These tasks will be handled by interconnected machines that deliver a frictionless commerce experience for customers.

However, before this utopian reality can happen, businesses need to rethink how they develop and test the software and systems to support this new age of commerce. There are three core elements that thing commerce providers must embrace as they build and deliver software and applications:

Test the user experience

With thing commerce, there are multiple products and services composed of a variety of technologies from an array of vendors. As a result, development teams across the ecosystem need to reorient from testing code compliance to understanding the actual user experience.

Embracing a user-centric approach to testing ensures you identify errors, bugs and performance issues before they have the chance to impact the user experience. This requires adopting an intelligent test automation platform.

Intelligent automation and bug hunting are mission-critical

The only way to truly test the thing commerce ecosystem from the user perspective is to utilize an intelligent automation engine. Intelligent AI-driven automation creates a model of user journeys and then automatically generates test cases that provide thorough coverage of the user experience, as well as system performance and functionality.
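To make this concrete, here is a minimal sketch of the model-based idea, assuming the user-journey model is a simple directed graph of screens and actions. The journey names and structure are illustrative, not any particular product’s API:

```python
# Hypothetical user-journey model for a smart-appliance purchase flow:
# each key is a state; each value lists the actions available next.
JOURNEY = {
    "home": ["search", "reorder_milk"],
    "search": ["product_page"],
    "product_page": ["add_to_cart"],
    "add_to_cart": ["checkout"],
    "reorder_milk": ["checkout"],
    "checkout": [],
}

def generate_test_cases(model, start="home", path=None):
    """Walk the journey model depth-first; every root-to-leaf path
    becomes one end-to-end user-experience test case."""
    path = (path or []) + [start]
    nexts = model.get(start, [])
    if not nexts:                       # a complete journey
        yield path
    for step in nexts:
        yield from generate_test_cases(model, step, path)

for case in generate_test_cases(JOURNEY):
    print(" -> ".join(case))
```

A real engine would also attach assertions and timing budgets to each step, but even this toy version shows why journeys, rather than code paths, become the unit of coverage.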

In addition, the AI algorithms hunt for errors in applications based on user journeys automatically generated from this bug-hunting model. This approach enables teams to quickly find, identify and address problems before release.

Continuous testing, continuous learning and predictive trends

Testing any digital experience is not a one-and-done exercise. It must be a continuous process so that you’re monitoring the digital experience over time. An AI algorithm watches test results, learns and looks for trends, and these learning algorithms enable predictive analytics. For example, the algorithm can identify whether the increasing delay in a particular workflow is likely to result in the connected system failing to replace out-of-date produce before the family meal.
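As a rough illustration of that trend-watching, the sketch below fits a least-squares slope to a workflow’s latency across test runs and warns when the projection crosses a budget. The data, budget and horizon are invented for the example:

```python
def trend_slope(samples):
    """Ordinary least-squares slope: latency change (ms) per test run."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

latencies = [220, 235, 251, 270, 294, 315]   # ms, one per nightly run
slope = trend_slope(latencies)
runs_ahead = 10
projected = latencies[-1] + slope * runs_ahead
if projected > 500:                           # assumed latency budget in ms
    print(f"warn: +{slope:.1f} ms/run; ~{projected:.0f} ms "
          f"expected within {runs_ahead} runs")
```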

Final thoughts

Thing commerce promises a world of possibilities that will free people from many mundane chores. However, for thing commerce to realize its potential, it’s essential that organizations change the way they develop software to ensure it delivers a consistent digital experience that delights customers. If not, thing commerce might erroneously deliver 20 bottles of milk to a smart fridge, delighting no one and incurring a lot of friction along the way.


January 13, 2020  4:20 PM

Operational IoT must be seen to be secured

Reggie Best
Endpoint devices, Endpoint management, Firewall security, Internet of Things, iot, iot security, Operational technology


A big barrier to effectively securing IoT and operational technology devices is simply not knowing they are there. Lack of visibility has been a recurring theme in FireMon’s annual “State of the Firewall Report,” as has managing complexity. The report doesn’t even begin to dig into the impact of IoT growth.

This year’s survey found that 34% of respondents reported having less than 50% real-time visibility into network security risks and compliance. From a firewall perspective, respondents are dealing with a lot of complexity: nearly 33% reported having between 10 and 99 firewalls in their environment, while 30.4% reported having 100 or more. Additionally, nearly 78% are using two or more vendors for enforcement points on their network, while almost 60% have firewalls deployed in the cloud.

Given the challenges firewalls create for security professionals, you can imagine how the exponential growth of IoT endpoints is compounding complexity. This is partially because these endpoints behave differently, and in turn, they must be onboarded and managed differently.

IoT and operational technology endpoints are driving enterprise network growth

IoT visibility has become a crucial area in the security market, and more traditional vendors — including Palo Alto Networks, Check Point, Forescout and Cisco — are responding accordingly by acquiring IoT expertise and operational technology (OT) know-how.

As data center workloads migrate to cloud computing and infrastructure-as-a-service delivery models, a significantly larger percentage of the enterprise network will consist of IoT and OT endpoints. Previously siloed systems — such as security cameras and sensors, turnstiles, badge readers and even building control systems — are converging with more traditional enterprise endpoints, such as desktops, laptops and servers, into a single, fluid IP-based infrastructure.

With everything on one network that no longer has a clear and defined perimeter, threats can easily migrate between the smarter, evolving OT areas into the IT domain, which makes visibility more essential than ever.

You must have visibility to manage IoT and OT complexity

Obtaining the level of visibility demanded by an environment populated with IoT and OT endpoints requires automation, something FireMon’s report also identified as a frequent pain point for respondents.

IoT and OT endpoints demand automation, both from an initial discovery perspective and from an ongoing status perspective. Given the nature of the devices, endpoints such as security cameras and turnstiles can be added in large volumes at once or on a piecemeal basis. Devices must also be checked regularly to ensure they are operating normally.
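As a rough sketch of what that automation can look like at its simplest, the example below sweeps a subnet for devices answering on a known service port (554/RTSP is typical for IP cameras) and diffs the result against a stored inventory. The subnet, port and inventory are assumptions for illustration:

```python
import ipaddress
import socket

def discover(subnet="192.168.10.0/28", port=554, timeout=0.3):
    """Return the set of hosts on the subnet with the service port open."""
    live = set()
    for host in ipaddress.ip_network(subnet).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((str(host), port)) == 0:   # 0 means reachable
                live.add(str(host))
    return live

known_inventory = {"192.168.10.2", "192.168.10.5"}
current = discover()
print("new endpoints to onboard:", current - known_inventory)
print("endpoints gone dark:", known_inventory - current)
```

Real discovery platforms use far richer fingerprinting, but the discover-then-diff loop is the essential automation for both initial onboarding and ongoing status checks.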

Visibility means having a consistent view of all these endpoints, including basic characteristics such as connectivity and device function. It also means understanding the infrastructure they’re connected to and how even a simple OT device is in a position to affect more complex IT operations if it’s not properly provisioned from a security perspective. The way these devices connect to the network can open unexpected and unwanted paths into the heart of the organization. All it takes is one leak to drastically affect security and compliance posture.

In order to achieve adequate visibility, IT admins must see IoT and OT endpoints being onboarded in real time so they can automatically apply global security policies and segment devices. This is necessary to limit the negative impact of any anomalous activity, which must be easily detectable for security teams to respond proactively.

Establishing visibility of IoT and OT endpoints as part of the broader IT landscape enables you to begin tackling the unique complexity they bring to the network.

Diversity and variety compound complexity

The diversity of IoT and OT endpoints should not be underestimated. In the same way multi-cloud environments add to complexity and confusion over shared security responsibility, there are new device behaviors security professionals must be ready to handle.

Just as servers, desktops, laptops and smartphones can all begin to misbehave and pose a threat to the corporate network, so can the many IoT and OT devices that are added to fluid IP infrastructure. The failures and glitches of more traditional hardware tend to be par for the course for security teams, but the sheer variety and diversity of IoT and OT endpoints make failures more complicated to handle, especially when a device functions in an unexpected way.

The compromise of these endpoints can have significant ramifications. In the healthcare realm, the devices can be lifesaving, and in many other scenarios, such as energy generation and delivery, water and waste management, and traffic control, their security is paramount to keeping people safe and maintaining quality of life for entire communities.

Because most of these endpoints are embedded, enclosed devices, often the ability to secure them using agents — as with more traditional IT endpoints — is somewhat limited. This means visibility, discovery and management must be much more network-centric. In the same way IT security teams have evolved to manage and monitor hybrid cloud environments, IoT and OT endpoints have further diversified the environment to create a more dynamic infrastructure.

A unified view requires a network-centric approach

Reducing complexity and increasing real-time visibility means having a single platform that will discover, monitor and remediate when necessary — not just cloud, virtual, physical and software-defined network infrastructure, but also the proliferating IoT and OT endpoints that merge with traditional IP infrastructure.

A network-centric approach solves the IoT and OT device conundrum because it discovers and monitors the cloud accounts, network paths and endpoints inherent to traditional IT infrastructures. It also watches for changes in real-time to identify new leak paths that might be created by IoT/OT environments. By correlating a comprehensive understanding of your enterprise’s active IP address space against known threats as new data becomes available, including IoT and OT endpoints as they are connected to the network, you have intelligence you can act on.
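A minimal sketch of that correlation step, with invented inventory and indicator data, is simply a join between the active IP space and each threat feed update:

```python
# Hypothetical inventory of the active IP address space, including
# IoT/OT endpoints discovered as they were connected to the network.
active_ip_space = {
    "10.0.1.21": "badge-reader",
    "10.0.1.34": "hvac-controller",
    "10.0.2.18": "security-camera",
}

def on_feed_update(indicators):
    """Called whenever new threat data becomes available."""
    hits = {ip: role for ip, role in active_ip_space.items()
            if ip in indicators}
    for ip, role in hits.items():
        print(f"ALERT: {role} at {ip} matches a known indicator")
    return hits

on_feed_update({"10.0.2.18", "203.0.113.99"})
```

In practice, a hit would feed a segmentation or quarantine workflow rather than an alert printout, but the correlation itself is this straightforward once the inventory exists.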



January 9, 2020  4:30 PM

Positioning IoT to profit through product packaging

Cris Wendt
Internet of Things, iot, IoT business model, IoT data management, IoT monetization, IoT products, IoT strategy

This article is the fifth in a six-part series about monetizing IoT. Find the previous article here.

A few common approaches can be used to bring products to market to create diverse revenue streams within IoT. So far, this series about monetizing the IoT stack has defined various elements of the stack that can be monetized, introduced the three dimensions of monetization and described two of the three dimensions of the monetization framework for IoT: monetization models and monetization metrics. This article will address the third dimension of the monetization framework for IoT, known as product packaging.

Product packaging is the methodology for partitioning product functionality to bring different products to market with a variety of offerings, such as a single product leveraged by different SKUs to generate multiple revenue streams. This is another area where art meets science. Product management can be creative in meeting market needs and revenue goals with different approaches to partitioning products’ functionality.

Product packaging examples

Product packaging can be a very broad topic, so I will identify a few common approaches. For example, imagine a fictitious IoT utility equipment provider, Sensorytics. Sensorytics has a SaaS analytics platform that’s sold to various municipalities throughout the world.

The analytics platform ingests utility smart meter information, such as gas, water and electricity, and stores and uses that information to perform various analytics, reporting and alerting. In terms of relating this to the IoT stack, we’ll concentrate on the cloud aggregation and analysis part of the stack, but the principles apply to each level of the stack as well as to multiple elements of the stack combined into larger bundled offerings.

The analytics platform can provide value to the different types of utilities that a municipality offers. The product manager at Sensorytics initially decides to create three offerings, one for each utility:

Market-based structures

The initial step is to create a base analytics package for each of the three types of utilities. Each utility sees different value in the basic offering, so each package provides a slightly different set of basic reports specific to that market. Initially, the product manager has three offerings:

  • Analytics for Gas Management
  • Analytics for Water Management
  • Analytics for Electricity Management

Persona-based structures

Digging deeper into the gas market, the product manager identifies three additional specialized offerings that address three different personas, each of which values a different set of analyses and alerts. Upon release, the product manager has three additional offerings:

  • The finance department manager of the utility company is interested in a Gas Analytics for Billing to focus on gas consumption for billing services.
  • The service department is interested in Gas Analytics for Services to identify possible problem areas in the gas lines and react to them proactively.
  • The marketing department is interested in Gas Analytics for Usage in order to optimize pricing based upon usage patterns.

The product manager decides that the offerings will be structured in such a way that a gas utility company must first buy Analytics for Gas Management before it can buy the more specialized persona-centric offerings.
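A minimal sketch of that prerequisite structure, reusing the fictitious Sensorytics SKU names, is an entitlement check along these lines:

```python
# Persona-centric SKUs require the base market SKU, per the structure
# described above; the mapping is illustrative.
PREREQUISITES = {
    "Gas Analytics for Billing": ["Analytics for Gas Management"],
    "Gas Analytics for Services": ["Analytics for Gas Management"],
    "Gas Analytics for Usage": ["Analytics for Gas Management"],
}

def can_purchase(sku, owned):
    """Return whether a customer may buy the SKU, and what is missing."""
    missing = [p for p in PREREQUISITES.get(sku, []) if p not in owned]
    return not missing, missing

ok, missing = can_purchase("Gas Analytics for Billing", owned=set())
print(ok, missing)   # False: the base offering must be bought first
```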

Journey-based structures

As customers begin to use the Analytics for Gas Management offering, they start to ask Sensorytics for increased functionality so that the product matches the users’ workflow. Building on the persona-based model described above for gas utilities, the service department decides that it wants richer functions and a wider footprint. This leads to three additional products for the service department that match the department’s workflow, bringing the total number of offerings to nine.

Gas Analytics for Services Dispatching. This is a reporting engine used by the services department for dispatching service personnel to various locations based upon where the analytics platform identifies problem areas.

Gas Analytics for Services Timecards. This is an extension to the basic analytics platform that now inputs service personnel timecards to aid in services billing.

Gas Analytics for Services Optimization. Used to aggregate information gathered from the utilities’ endpoints, this is combined with service personnel location and expertise to design an optimal personnel utilization plan that maximizes coverage and minimizes services costs.

Best practices

As you can see, a variety of offering structures can be utilized to create multiple revenue streams from a single product or platform to match different market, persona and workflow requirements. This can be further extended to create different bundles of individual offerings or to create offerings as a combination of product function and product metrics.

One word of caution: keep models relatively simple. Avoid SKU explosion, a situation in which 20% of the SKUs generate 80% of the revenue while the remaining SKUs merely add complexity. Balancing the simplicity of the offering with revenue maximization is part of the art of product packaging.



January 8, 2020  4:31 PM

What 2019’s ‘summer of outages’ means for IoT

Mehdi Daoudi
Cloud outages, Internet of Things, iot, IoT analytics, IoT cloud, IoT cloud strategy, IoT connectivity, IOT Network, IoT strategy

In the summer of 2019, several cloud service providers experienced nagging bouts of unplanned downtime, impacting thousands of businesses. Google had an outage in June, which brought down several of its most popular services including Search, Nest, YouTube and Gmail, and was hit by another major outage in early July. Apple also experienced a widespread cloud outage in July, which affected the App Store, Apple Music and Apple TV. Cloudflare, Facebook and Twitter also had problems.

The macro conditions underlying these outages can often be boiled down to increased internet complexity or rushed-to-market software releases. While IoT was not involved in these recent problems, the implications of such unpredictability are significant for any cloud-reliant IoT project, especially those on which lives and safety depend.

This is because IoT and cloud computing are increasingly intertwined and symbiotic technologies. IoT devices generate huge amounts of data, with the cloud often serving as the central data collection and analysis repository. For example, consider a large multinational enterprise with IoT-connected thermometers across hundreds of factories, each one constantly generating data for analysis. These thermometers might be connected to other IoT devices and services, such as a factory manager’s remote, smartphone-based thermostat app. All of this requires superior speed and availability to work, making the recent spate of outages a major cause for concern.

Industrial IoT applications like this are just the tip of the iceberg. It’s one thing for factory thermometers to go down due to the cloud, but what happens when IoT is managing something even more critical, such as hospital systems and equipment?

The recent outages shouldn’t dissuade IoT projects from leveraging the cloud because, in many cases, the cloud offers higher levels of security, reliability and delivery speed than organizations can deliver themselves. But it does mean these organizations must be discerning and proactive about protecting themselves, especially if their IoT applications are mission critical.

Monitor the cloud yourself

Assurances from a cloud provider regarding availability and speed, such as the round-trip time of packets traveling to and from your connected IoT devices, can give some peace of mind and a sense of the provider’s overall infrastructure health. However, this should be considered supplemental information only and cannot be relied upon exclusively for ensuring that IoT device connections are reliable and fast.

This direct-to-the-cloud-and-back monitoring is not necessarily indicative of reality. Cloud service providers have partnerships with internet service providers (ISPs) and better network intelligence on how to route traffic. This means that, whenever possible, cloud monitoring will bypass the broader internet infrastructure that IoT device data must traverse, keeping packets in transit on the provider’s own networks and optimizing speed from point A to point B, and vice versa. This can result in a skewed, overly positive sense of IoT device connectivity and communication speed, because in the real world, networks and other external elements can get in the way.

Don’t track the cloud from only the cloud

While it’s critical for organizations with cloud-dependent IoT projects to do their own monitoring, they should never monitor the cloud from cloud-based infrastructure only. For the reasons outlined above, you might get a warped view of actual performance. Never monitor the cloud from the same cloud provider that’s handling your IoT project. If this cloud goes down, you’ll be blind to how your co-located IoT system is doing. You must make certain your monitoring vantage points are a mix of backbone, ISP, wireless and other node types.

Monitor IoT device availability

In an IoT world, devices essentially are the end users, so it is important to consistently monitor them and ensure they are reliable and interoperating with other IoT devices with exceptional speed. Since cloud service providers’ infrastructure consists of datacenters and other servers spread across the globe, a problem can occur anywhere and impact isolated segments of IoT devices.

That’s why it is critical to have as many eyes as possible, in all the key geographies where you have IoT devices running, as well as from the various network vantage points through which your IoT devices connect to the internet. This will put you in the best possible position to proactively detect IoT outages or slowdowns. Combining this with deep analytics will give you a head start in addressing the problem, whether it’s related to the cloud or not.
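As a rough sketch of a single vantage point’s probe, the example below measures availability and latency from one labeled node; the endpoint URL, vantage label and threshold are placeholders. The same probe would run from backbone, ISP and wireless nodes alike, with results shipped to a central analytics store:

```python
import time
import urllib.request

VANTAGE = "isp-chicago"    # set per monitoring node (backbone, ISP, wireless)
ENDPOINT = "https://iot-api.example.com/health"   # placeholder URL

def probe(url=ENDPOINT, threshold_s=1.0):
    """One availability/latency measurement from this vantage point."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    elapsed = time.monotonic() - start
    record = {"vantage": VANTAGE, "ok": ok, "latency_s": round(elapsed, 3)}
    if not ok or elapsed > threshold_s:
        print("degraded:", record)
    return record      # ship to the central analytics store

probe()
```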

Have redundancy plans in place

If your IoT project supports a mission-critical process, you should consider having a multi-cloud strategy as a form of backup and protection. This might require a good amount of work, but it’s often worth it. You’ll need to make sure all the key phases of an IoT project, namely real-time data processing and storage, can be quickly ported over to another cloud in the event of primary cloud failure. This means testing failover strategies in advance to ensure cloud-to-cloud interactions are fast and reliable enough to support real-time data replication.
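The failover decision itself can be as simple as the sketch below, which prefers the primary cloud’s ingest endpoint and falls back to the secondary when health checks fail. The URLs are placeholders, and a real strategy must also cover replicating data between the clouds:

```python
import urllib.request

CLOUDS = [
    ("primary", "https://ingest.cloud-a.example.com/health"),
    ("secondary", "https://ingest.cloud-b.example.com/health"),
]

def healthy(url):
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_ingest_target():
    """Choose the first healthy cloud; escalate if none respond."""
    for name, url in CLOUDS:
        if healthy(url):
            return name, url
    raise RuntimeError("no healthy ingest target; buffer data locally")

try:
    print(pick_ingest_target())
except RuntimeError as err:
    print("failover exhausted:", err)
```

The important part is exercising this path regularly, not just writing it; an untested failover plan is indistinguishable from no plan.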

Final thoughts

The cloud has many attributes that make it ideal for supporting IoT projects. Not surprisingly, growth in IoT data has led to cloud service provider growth and expansion, which supports more IoT data and projects. Together, the cloud and IoT represent a set of inextricably linked technologies of the future.

But organizations running IoT in the cloud must proceed with caution. If we’ve learned anything, it’s that even the strongest businesses in the cloud industry can — and inevitably will — go down. It’s up to you to take the steps needed to better prevent your IoT project from going down with them. In fact, this is something you can’t afford not to do.



January 6, 2020  12:19 PM

It’s tough living on the edge

Gordon Haff
Cloud management, Edge computing, Internet of Things, iot, IoT cloud, IoT data management, IoT edge computing, IoT strategy

The past couple of years have seen computing pushed out to the edge of the network at an ever-faster pace. The details vary as there are many different “edges” depending upon what problem is being addressed. But the overall trend is clear. By 2023, over 50% of new enterprise IT infrastructure deployed will be at the edge rather than corporate datacenters, up from less than 10% today, according to IDC.

High volume data streaming in from IoT devices, which often must be quickly processed, filtered and acted upon, is one driver of edge computing. But there are an increasing number of other application areas, such as telco network functions, which are best optimized by placing service provisioning closer to users and devices.

None of this is a repudiation of cloud computing, but it does illustrate how assumptions that computing was on a path to wholesale centralization were simplistic at best. In practice, enterprise computing is highly heterogeneous, and organizations are mostly pursuing hybrid cloud approaches.

Although distributing compute out to where data, users and devices live has advantages, it also introduces challenges relative to centralized computing. These fall into three general categories: architecture and technology, ongoing operations and security.

Architecture and technology

The scale of some edge deployments is such that they can use software similar to what runs in datacenters. For example, OpenStack is popular among telcos for creating private clouds at the edge, just as it is for creating private clouds in a more traditional on-premises environment.

However, even if some of the software stack is common, edge installations must take several unique considerations into account. For example, you can’t just flip a switch and add more servers from a central pool if more capacity is needed at the edge.

It’s important to plan for the needed compute, as well as storage, networking and any other hardware, up-front. Upgrading hundreds or even thousands of edge sites is an expensive undertaking. At the same time, the cost of over-provisioning those hundreds or thousands of sites adds up quickly too. The lesson here is that you must design deliberately.

As noted earlier, the edge can look very different depending upon the application. An appropriate architecture for hundreds of clusters with tens of servers each differs significantly from one for thousands of smaller endpoints, much less one that’s made up of millions of individual edge computing devices.

Operations within large distributed systems

There are also practical issues related to operating a large distributed system. All those edge clusters might be installed in locations that don’t have an IT staff and might even be in places with no permanent human presence at all.

We need to account for the fact that this is a distributed system connected by potentially unreliable and throughput-constrained networks. How do we want an edge cluster to behave if it loses its connection to the datacenter? If disconnected operation makes sense, the system needs to be designed with that in mind.
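One way to design for disconnection, sketched below with illustrative names, is a bounded store-and-forward buffer: the cluster keeps acting on local rules, buffers readings while the link is down and drains the backlog when connectivity returns:

```python
import collections

class EdgeBuffer:
    """Bounded store-and-forward queue for an intermittently connected site."""

    def __init__(self, capacity=10_000):
        # Bounded on purpose: in a long outage the oldest readings are
        # dropped first, an explicit design decision rather than an accident.
        self.queue = collections.deque(maxlen=capacity)

    def record(self, reading):
        self.queue.append(reading)

    def drain(self, send):
        """Call when the datacenter link returns; `send` uploads one item."""
        while self.queue:
            send(self.queue.popleft())

buf = EdgeBuffer()
buf.record({"sensor": "temp-7", "value": 81.2})
buf.drain(send=print)   # stand-in for the real uplink
```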

We also need to deal with failures within the edge cluster itself. Failures are a normal, expected event at scale. We must provide redundancy while also considering cost tradeoffs. Is it cheaper to install some extra hardware so that repairs can be made mostly on a slower-paced, scheduled basis? Or are we better off running leaner and treating each failure as an urgent event?

Site management operations, such as deployments and upgrades, must be handled remotely and be fast, reliable and automated. Good monitoring and logging are required for centralized management to work at all. Effective analytics can also help to predict failures and thereby head off some problems before they occur.
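A rough sketch of what automated remote rollout can look like, with stand-in upgrade and health-check functions, is a wave-based deployment that halts early when failures accumulate:

```python
def upgrade(site):
    return True    # stand-in for the real remote upgrade mechanism

def healthy(site):
    return True    # stand-in for post-upgrade monitoring checks

def rolling_upgrade(sites, wave_size=50, max_failures=3):
    """Upgrade sites in waves; stop before a bad build reaches everyone."""
    failures = []
    for i in range(0, len(sites), wave_size):
        for site in sites[i:i + wave_size]:
            if not (upgrade(site) and healthy(site)):
                failures.append(site)
        if len(failures) > max_failures:
            raise RuntimeError(f"halting rollout; failed sites: {failures}")
    return failures

rolling_upgrade([f"edge-{n:04d}" for n in range(200)])
```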

Edge computing security

In some ways, security is a subset of operations in the context of edge computing, but it’s important enough that it’s worth calling out separately. I’ve written previously about IoT device security specifically, but edge computing as a whole also has some specific security challenges.

The scale of many edge computing installations means that the automation mentioned above must apply to security as well. Automating patching and security scanning is a good practice. But using automated tooling that enforces security policies and minimizes potential vulnerabilities at distributed sites is essential.

The edge has other unique considerations. In general, datacenters have established robust physical security practices around controlling access to the hardware, properly disposing of assets, such as disk drives that may contain sensitive information, and generally providing a highly engineered and controlled environment.

This is often not the case with edge clusters. Branch office and other remote systems have had to take these factors into account for a long time. However, edge locations might not even have the level of controls that a bank branch or satellite company office does. And the scale can be much greater.

Plan and plan again

One could argue that there are relatively few challenges that we see in edge computing that we don’t also see — to greater or lesser degrees — elsewhere. But that’s the rub: We see requirements for failure resiliency and automation everywhere. But dealing with them at the edge, where both distribution and scale are so great, can be especially challenging. For example, once a fix is identified, rolling it out to every edge cluster is probably a much more significant task with more failure modes than in the case of centralized infrastructure.

The above example further highlights that edge architectures need to be carefully planned. This includes up-front design work that considers the practical realities of a highly distributed system that exists largely outside of controlled datacenter environments. But it also includes deliberate planning for the ongoing operations of the entire system, including provisioning, failure recovery, upgrades and security.



January 3, 2020  1:07 PM

Are we truly ready to support IoT’s impending effects?

Gil Rosen
Connectivity, Internet of Things, iot, IoT connectivity, IOT Network, IoT sensors

When it comes to connectivity, we’ve been conditioned to think mobile-first, where a service runs on a given data package or Wi-Fi connection. But we need to change this mindset. By 2023, the worldwide number of IoT-connected devices is predicted to increase to 43 billion, three times more than in 2018 and well beyond mobile’s growth.

With this dramatic increase in IoT, there is an exponential hidden change in everything that relates to it, and we need to start thinking beyond just mobile. The question is – are we ready to support this?

Going back to the drawing board

The average person will have more than 13 network-connected devices by 2022, which is why things will have to change. These different IoT devices belonging to one user or household will come with different types of plans, connectivity options, monetization models, operating systems and onboarding experiences. A dumb device approach — where something connects with no real care for a consistent practice — will cause a wide range of problems.

When IoT has its breakthrough, connectivity providers need to be ready with a place in the value chain beyond just connectivity and back-end operations. This means handling onboarding, security and quality assurance, and tailoring IoT services to different verticals and target audiences. When we turn IoT sensor signals in a physical environment into insights, we achieve the next level in the connectivity revolution. We effectively digitize the physical world. This is where opportunity awaits, and data intelligence will be critical for its success.

What are connectivity players overlooking at present?

The IoT revolution will not only be about big consumer electronics manufacturers but smaller, niche players in the ecosystem. Connectivity providers can’t be in a position where they are only working with a handful of device manufacturers and cutting themselves out of revenue opportunities. They need to be a critical enabler, not a holdup.

So, what’s the best approach for connectivity players when it comes to devices? Ensuring they define the customer experience during the onboarding process and the ongoing service itself. This is especially important as device manufacturers have more influence than ever over how profiles are provisioned when it comes to IoT.

The most important takeaway here is this: the entire history of our industry has been viewed through a mobile device keyhole, where a service is provided with one price plan or data package. This is about to change dramatically. The connectivity providers who start planning now for the next era in our industry will have a competitive edge.



January 3, 2020  10:55 AM

Private 5G networks will play a pivotal role in Industry 4.0

Ben Pietrabella
5G, 5G adoption, 5G and IoT, Internet of Things, iot, IoT wireless, private 5G

While many business verticals are looking to 5G technology to boost bottom lines and improve customer interactions, perhaps none is doing so more than the manufacturing sector, driven largely by IoT and the promise of the digital industrial revolution known as Industry 4.0. As such, industrial enterprises are increasingly considering deployment of private 5G networks.

A key feature of private 5G networks is the release of unlicensed spectrum, which enables companies to operate a private network without going through a mobile operator. This flexibility is driving mobile carriers to develop unique strategies for attracting industrial enterprise clients. Some operators are leasing their own spectrum to support private enterprise networks, while others are developing private wireless networks that are then sold to enterprise customers.

Regardless of how they’re built, the fact is that private networks are growing in popularity. Last year, private LTE and 5G networks accounted for some $2.5 billion in spending. With a projected CAGR of about 30%, the market is expected to surpass $5 billion by the end of 2021, according to market research firm ReportsnReports.

A number of factors are fueling this growth, including digital transformation initiatives and rising demand for highly reliable, secure wireless communications. Moreover, IoT, which underpins Industry 4.0, is driving new connectivity requirements for productivity, efficiency and quality of service (QoS).

5G’s technological advances are also propelling the growth of private networks. 5G provides enhanced broadband and throughput, ultra-reliable low latency and massive capacity for IoT. It also supports network slicing capabilities and low-power wide-area networks to support a full and cost-effective ecosystem necessary for effective industrial private networks.

Additionally, 5G enables easy upgrades, access and control via SIM card-based credentials, and it enables simple interconnectivity between different technologies. 5G new radio will deliver the guaranteed real-time response that’s critical for things like closed-loop motion control operations and remote robotics management. What’s more, 5G enables immense densification and IoT connectivity support, with guaranteed QoS for industries that have hundreds of thousands of sensors in relatively small areas.

Why go private?

Industrial enterprises face a number of challenges that private 5G networks could help address. Many rely on wired connectivity in their production facilities. However, such networks suffer from poor flexibility, scale and remote connectivity. In addition, current manufacturing networks depend on Wi-Fi, which isn’t reliable for mission-critical, always-on connectivity.

Equally important is the fact that carrier roadmaps and network timelines are not aligned to the needs of Industry 4.0. Instead, communications service providers are mainly focused on consumer services and have limited knowledge of the connectivity needs of advanced industrial networks. Consideration must also be given to the complexity of networks and the associated costs to companies that want to deploy them.

In contrast to wired networks, private 5G networks are managed locally and have dedicated equipment to provide local coverage that’s optimized for local services. The networks are optimized and tailored for industrial applications, especially those with stringent QoS and reliability demands.

Additionally, private 5G networks are dedicated and independent, ensuring data privacy and improved security. Such networks are driven by CIOs who control the technologies and digital transformation roadmap. Overall, private 5G networks provide control and flexibility by leveraging network slicing, vast bandwidth, light costs and low latency.

Last but not least, private 5G networks ensure a rapid return on investment — generally less than three years — and improved time to market for new products. Open architecture and cloud-based deployment serve to future-proof the enterprise platform, while also promoting product revenue growth.

How can organizations use private 5G networks?

Although private 5G networks are still in their infancy, a number of key use cases have emerged for manufacturing, including automated operations, track and trace, remote robotic manufacturing, predictive maintenance, remote product support and the integration of supply chains and inventory management.

Industrial facilities that are particularly well suited to take advantage of private 5G networks include:

  • Shipping ports
  • Transportation hubs
  • Distribution warehouses
  • Upstream and downstream oil and gas operations, as well as oil and gas transport
  • Surface and underground mining operations
  • Process manufacturing, hybrid manufacturing and discrete manufacturing plants
  • Hospitals and labs
  • Power plants
  • Water treatment plants

A number of live trials of private 5G networks are already underway. In one trial, KPN and Shell have partnered to create a 5G network at the Port of Rotterdam for the preventative maintenance of almost 10,000 miles of pipelines. By combining ultra-high-definition cameras, the 5G network and machine-learning algorithms, maintenance of the pipelines is better predicted, and engineers receive needed information about the system on tablets that support augmented reality.

Siemens and Qualcomm recently announced that they have implemented what they claim is the first private 5G standalone network in a real industrial environment. BASF, a producer of chemicals, is also developing its own ultra-fast 5G network at its primary plant in Ludwigshafen, Germany. Volkswagen and Daimler also plan to independently create private 5G networks.

Challenges to overcome

Deploying a private 5G network isn’t without its challenges, regardless of who owns it. Companies must consider the right business model for them. They can rely on a systems integrator to design and deploy the network, or partner with a carrier to which they can outsource the solution. Either way, the business case, network design, integration, deployment and operations must be defined in detail. The solution needs to reflect the enterprise’s IoT strategy, meet its objectives and leverage existing technologies. This will lead to improved service and lower ownership costs in the future.

There is no doubt that the next couple of years will be exciting ones for industrial enterprises looking to 5G to improve processes, increase productivity and enhance customer interactions. Private 5G networks will play an increasing role in making the promise of the Fourth Industrial Revolution a reality.



December 30, 2019  2:50 PM

Understand the basics of multi-tier wire bonding

Zulki Khan
Internet of Things, iot, IoT device makers, IoT hardware, IoT PCB, IoT strategy, PCB design, Printed circuit boards, wire bonding

Electronic systems and equipment OEMs have historically pushed for greater functional integration. This means putting more electronic power on increasingly smaller chips, as well as making sub-assemblies even smaller. All this goes under the banner of increasing performance and decreasing cost. This is truer today for IoT devices than ever before.

Wire bonding plays a big part in these evolving technology trends

Wire bonding, which is the process of connecting a chip to its associated sub-assembly or printed circuit board (PCB), represents a key part of many IoT devices’ overall electronics operation. Traditionally, a single wire bonding operation has been used to connect a chip or die to the PCB or substrate.

However, we’re now seeing customers earnestly investigating considerably greater functionality for their next generation IoT products. That means deploying multi-tier wire bonding applications that utilize the same substrate and die real estate.

As the name implies, multi-tier wire bonding ranges from two to four or more sets of wire bonds connecting a highly complex bare die or chip to the PCB or substrate. Single-tier wire bonding has reached a high level of efficiency and reliability on the PCB assembly and manufacturing floor.

But there are challenges when it comes to multi-tier wire bonding. For the IoT device OEM taking the multi-tier wire bonding route, it’s best to rely on EMS providers that have solid footing in this emerging technology. The OEM should know that multi-tier wire bonding offers a solution when the number of I/Os is far beyond what a single wire bonding application can support.

In multi-tier wire bonding, the different rows of wires are isolated by maintaining a different loop height for each row. This creates a vertical gap between the rows of wire from the first row to the second, third and fourth rows.

Multi-tier wire bonding increases the capacity and the capability of a bare die. If wire bonding is double stacked, I/O capacity is doubled by adding a second set of wire bond pads. If you go on to third-tier wire bonding, you triple the I/O capacity. Meanwhile, the bare die remains the same. Essentially, you are extracting more functionality out of the same die.

Multi-tier wire bonding challenges

However, there are a few challenges with multi-tier wire bonding. First, wire bonder precision is critical. You must ensure that the first row of wire bonding is the lowest in height; the second higher than the first; the third higher than the second; and the fourth the highest.

Third- and fourth-tier wire bonding demands a well-trained operator, top-notch precision and process control, exact calculations, and computational knowledge of wire size and bonder restrictions — including x, y and z directional restrictions — among other key requirements. Also, wire looping must be correctly calculated. If it’s not, wiring is prone to sag, which creates shorts with other rows of wires.

For example, if a third tier of wire bonding is incorrectly performed and sags, it creates the possibility of a short with the second tier of wire bonding. Re-working this problem is difficult because wire bonding can only be re-done two or three times; after that, the pad is worn down.
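A simple pre-bond sanity check captures the loop-height rule described above: each tier must clear the one below it by some minimum margin so a sagging wire cannot short against the next row. The heights and clearance below are assumed values, not process constants:

```python
def check_loop_heights(heights_um, min_clearance_um=25):
    """Flag adjacent tiers whose vertical gap risks a sag-induced short."""
    problems = []
    for tier in range(1, len(heights_um)):
        gap = heights_um[tier] - heights_um[tier - 1]
        if gap < min_clearance_um:
            problems.append(f"tier {tier + 1} is only {gap} um above "
                            f"tier {tier}: risk of sag-induced short")
    return problems

print(check_loop_heights([100, 150, 160, 250]))   # flags tier 3 vs tier 2
```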

Another major challenge involved with multi-tier wire bonding is wire pull testing (WPT). This PCB microelectronics assembly step focuses on wire bond strength and quality. It involves applying upward force under the wire to be tested, and WPT is applied on every tier. Each tier of wires must be pull tested before the next tier is bonded; if tiers are not tested properly at the right time, it becomes a challenge to test them without damaging other wire tiers. Therefore, technicians must know the requirements that assure effective WPT.

The future of multi-tier wire bonding

Multi-tier wire bonding is one PCB microelectronics technology that will gain greater interest for newer, smaller OEM products across multiple industries. For example, greater functionality at lower cost is much sought after in medical electronics, IoT devices, wearables and other portable gear.

The savvy IoT device OEM and its product designers will take into account several considerations when seeking guidance for multi-tier wire bonding, which includes optimal design and size of the pad. Pad pitch becomes important, as does the loop height and length of the wires. In addition, the staggering pitch is important, which is basically the pitch from one row to the second to the third, and so on.



December 27, 2019  11:29 AM

How to determine the top IoT usage and data industries for 2020

Lewis Carr
Internet of Things, iot, IoT data, IoT devices, IoT management, IoT strategy, IoT trends

If you’ve seen the film Casablanca, you’ll remember one of the final scenes where Captain Louis Renault turns to Rick Blaine and says, “round up the usual suspects.” As we head into 2020, the mystery of which industries will show the greatest penetration of IoT usage and value begins to unfold. However, this question has popped up annually for the last decade, and with several analyst firms and magazines publishing overlapping lists, it’s easy to round up the usual suspects: manufacturing, healthcare, and transportation and logistics. Or is it smart cities, retail, and media and entertainment? How do you actually decide?

Defining the usual suspects

Let’s examine what rounding up the usual suspects means. To determine these top industries, we want to look at the characteristics or traits that warrant something being a great application for IoT and the need for data at the edge. Based on my previous experience as an embedded systems engineer, I believe the answer boils down to business and technical virtual teams asking themselves the following questions and generating clear, positive answers to them (a minimal checklist sketch follows the list):

  • Will the investment in IoT quantitatively reduce my costs or enable me to generate more revenue?
  • Can I model out my edge in a way that enables me to identify how applications of IoT with associated intelligence at this edge will generate positive outcomes for the answer to question one?
  • Can I apply the appropriate process orchestration, rules and exception handling, and supporting data processing and analytics to implement or improve automation, machine-to-machine interface and decision support in real-time to what I’ve modeled in the answer to question two?
  • Can I implement the infrastructure necessary, such as 5G networks, edge data management, bolt-on intelligence to brown-field environments and implementation of select machine learning inference models out at the edge to support a portfolio of projects addressing the answer to question three?
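Here is that checklist sketch, with hypothetical answers; the point is simply that a project only moves forward when every question has a clear, positive answer:

```python
QUESTIONS = [
    "Quantified cost reduction or new revenue?",
    "Edge modeled so IoT intelligence maps to those outcomes?",
    "Orchestration, rules and real-time analytics in place?",
    "Infrastructure (5G, edge data management, ML inference) feasible?",
]

def ready(answers):
    """All four questions must be answered positively to proceed."""
    unresolved = [q for q, a in zip(QUESTIONS, answers) if not a]
    return not unresolved, unresolved

ok, gaps = ready([True, True, True, False])
print("ready to shortlist" if ok else f"not yet; open questions: {gaps}")
```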

Identifying the usual suspects

If you can create positive workable answers to all four of these questions, you are ready to line up your suspects for positive ID by the eyewitness. In other words, look for the right IoT projects in the right industries. I would argue that you should look for the following:

  • Real-time edge decisions and actions. These decisions can make or break your business. For example, prescription drug packaging, product defects on an assembly line, package routing on a conveyor belt, and many others.
  • Remote field environments. This is where bandwidth for connection to the cloud or other centralized IT resources is at a premium, or where connections that are core to your business experience periodic disconnection, either unexpectedly or by design, such as airline in-flight services, transoceanic cargo shipping, and certain defense and intelligence cases.
  • Local data processing. This is additional processing of IoT devices and the data they generate by resident software containing artificial intelligence, which can then be applied in an evolutionary, iterative way across multiple projects. For example, condition monitoring leads to preventative maintenance, then to predictive maintenance that allows the same data and underlying infrastructure to support digital twinning.

With this methodology, you can make your own predictions as to which industries have the greatest application of IoT devices and data at the edge. For me, the manufacturing industry tops the list for 2020.

However, it’s not all sub-verticals in manufacturing. In fact, I would single out heavy industry and high-tech manufacturing. To get even more granular, I’d go with manufacturing that has an Industry 4.0 roadmap toward digital twinning. Similarly, transportation and logistics would be on my list, with a focus on streamlining and accelerating accurate package delivery. With these characteristics in mind, you can then revisit the usual suspects in a more granular and measured fashion.



December 27, 2019  10:45 AM

Are there only three IoT use cases?

Ken Figueredo
enterprise data hub, Internet of Things, iot, IoT data, IoT data management, IoT devices, IoT strategy, IoT use cases, middleware

The fifth edition of IoT Solutions World Congress took place in Barcelona a few weeks ago. During it, two IoT industry heavyweights made some noteworthy comments on the state of the IoT market.

There are only three IoT use cases, according to Vodafone’s head of IoT development, Phil Skipper. One of these is asset tracking. A second is remote monitoring. The third, which depends on 5G capabilities in Release 16 of the 3GPP standard, is ultra-reliable industrial control.

The Industrial Internet Consortium’s (IIC) CTO, Stephen Mellor, spoke about technology hype and the value of sharing end-user stories. Ideally, these should go beyond ‘proof of concept’ profiles to show how technology works and proves its value.

Mellor was critical of the 10 years it took the telecoms industry to define a framework for hardware, transport and application layers. While concluding that the “high level industry stuff and low-level connectivity” are reasonably well defined, he pointed out that a lot of what takes place in the middle is neither well defined nor easy to understand.

Middleware and IoT platforms

Vodafone and the IIC make good points in relation to the present-day view of IoT. Much of the IoT industry’s thinking and received wisdom focuses on devices and connectivity. Asset tracking and remote monitoring fall into the class of relatively well-bounded silo applications.

Many of these are linked to closed-loop, business-process and industrial control applications. However, the concept of interoperability will re-shape these use-case ideas. As illustrated, an evolutionary path exists from current to future IoT solutions. This will involve higher degrees of collaboration across departmental and business boundaries. Connectivity will be commoditized, and innovators will create value by enhancing application logic through data sharing.

[Figure: the evolutionary path from silo IoT applications to shared platforms and federated IoT. Source: Ken Figueredo]

Beyond the silo applications phase, large organizations and small businesses operating in extended supply chains will benefit from interoperability. In such situations, economic and technology-management factors favor shared, horizontal IoT platforms. The next stage of evolution will involve interoperability across organizational boundaries, which corresponds to the world of federated IoT arrangements.

The platform capabilities, in between high-level industry knowhow and low-level connectivity, are a form of middleware. This layer in the ‘middle’ enables IoT applications — business logic — to interact with connected devices and sensors. Another way of looking at this middleware component is as a means of abstracting technical complexities in the IoT solution stack. Abstraction masks complexity and leaves application developers and device vendors to concentrate on what they do best. In other words, they no longer must custom build the full IoT stack for each and every use case.

Data hub use case

The idea behind separating sources and consumers of data via a middle layer is evident in an emerging IoT use case known as the data hub. This is where several organizations share data sourced from their IoT devices towards a common purpose. One such example is the City of Dortmund. Its local energy utility, DEW21, is creating and comanaging a new data hub company that enables the combination, analysis and linkage of non-personal data to create smart city solutions in new application areas. Examples of this approach include intelligent parking management, supply line leakage detection and air quality measurement.

Another example is ConVEX, the newly launched UK initiative to develop a connected vehicle data exchange. This data hub aims to facilitate the commercial exchange of related data types, such as data from connected and autonomous vehicles as well as data from transport networks and about the environment. The data hub concept will streamline the process for private sector businesses to develop connected and autonomous mobility services. It will simultaneously help cities to achieve their goals of safer, cleaner and affordable transportation.
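Stripped to its essence, a data hub is a middle layer that decouples data producers from data consumers. The toy model below shows that separation in a few lines; it is an in-process illustration, not any real hub’s API:

```python
import collections

class DataHub:
    """Minimal publish/subscribe hub separating sources from consumers."""

    def __init__(self):
        self.subscribers = collections.defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

hub = DataHub()
# A parking app consumes occupancy data without knowing which utility
# or vendor produced it; the hub is the only shared dependency.
hub.subscribe("parking/occupancy", lambda p: print("app sees:", p))
hub.publish("parking/occupancy", {"bay": 12, "occupied": False})
```

Everything interesting in a production hub — registration, security, discovery, governance of non-personal data — layers onto exactly this separation.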

Horizontal standardization

For the data hub concept to scale and support multiple use cases, there needs to be a set of common and interoperable capabilities. Take the example of device management, which appears in almost every IoT solution. A common approach, such as LWM2M, makes it straightforward for vendors to supply devices with a generally accepted remote management capability.

That makes life easier for systems integrators and for solution developers. It’s an approach that eventually leads to scale economies. To support IoT solutions more broadly, there are many other common services, such as middleware services that enable security, registration, subscription and discovery capabilities.

As other organizations explore the middleware capabilities that enable more complex and cost-efficient IoT solutions, horizontal-layer standardization will be of critical design importance. Will existing industry heavyweights succeed in creating large ecosystems through a handful of packaged solutions or de facto standardization? Or will an open standards path, as advocated by oneM2M and 3GPP, prove to be more enduring by enabling a wider spectrum of IoT innovation?


