This article is the fifth in a six-part series about monetizing IoT.
A few common approaches can be used to bring products to market to create diverse revenue streams within IoT. So far, this series about monetizing the IoT stack has defined various elements of the stack that can be monetized, introduced the three dimensions of monetization and described two of the three dimensions of the monetization framework for IoT: monetization models and monetization metrics. This article will address the third dimension of the monetization framework for IoT, known as product packaging.
Product packaging is the methodology for partitioning product functionality to bring different products to market with a variety of offerings, such as a single product leveraged by different SKUs to generate multiple revenue streams. This is another area where art meets science. Product management can be creative in meeting market needs and revenue goals with different approaches to partitioning products’ functionality.
Product packaging examples
Product packaging can be a very broad topic, so I will identify a few common approaches. For example, imagine a fictitious IoT utility equipment provider, Sensorytics. Sensorytics has a SaaS analytics platform that’s sold to various municipalities throughout the world.
The analytics platform ingests utility smart meter information, such as gas, water and electricity, and stores and uses that information to perform various analytics, reporting and alerting. In terms of relating this to the IoT stack, we’ll concentrate on the cloud aggregation and analysis part of the stack, but the principles apply to each level of the stack as well as to multiple elements of the stack combined into larger bundled offerings.
The analytics platform can provide value to each of the different types of utilities that a municipality offers. The product manager at Sensorytics therefore starts by creating a base analytics package for each of the three utility types, since each market sees different value in the basic offering and needs a slightly different set of basic reports. Initially, the product manager has three offerings:
- Analytics for Gas Management
- Analytics for Water Management
- Analytics for Electricity Management
Digging deeper into the gas market, the product manager identifies three additional specialized offerings, each addressing a different persona that values a different set of analyses and alerts. Upon release, the product manager has three additional offerings:
- The finance department manager of the utility company is interested in a Gas Analytics for Billing to focus on gas consumption for billing services.
- The service department is interested in Gas Analytics for Services to identify possible problem areas in the gas lines in order to proactively react to problem areas that have been identified.
- The marketing department is interested in Gas Analytics for Usage in order to optimize pricing based upon usage patterns.
The product manager decides to structure the offerings so that a gas utility company must first buy Analytics for Gas Management before it can buy the more specialized persona-centric offerings.
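The prerequisite structure the product manager chose can be sketched as a simple catalog rule. This is a minimal illustration only: the offering names come from the example above, while the `PREREQUISITES` mapping and `can_purchase` function are hypothetical.

```python
# A minimal sketch of prerequisite-aware product packaging.
# The mapping and function names are hypothetical, invented to
# illustrate the base-package-first structure described above.

PREREQUISITES = {
    "Gas Analytics for Billing": "Analytics for Gas Management",
    "Gas Analytics for Services": "Analytics for Gas Management",
    "Gas Analytics for Usage": "Analytics for Gas Management",
}

def can_purchase(offering: str, owned: set[str]) -> bool:
    """A customer may buy a specialized offering only if they
    already own its required base package (if it has one)."""
    base = PREREQUISITES.get(offering)
    return base is None or base in owned

# A utility that owns only the base gas package may add billing analytics,
# but a utility with no base package may not.
print(can_purchase("Gas Analytics for Billing", {"Analytics for Gas Management"}))  # True
print(can_purchase("Gas Analytics for Usage", set()))                               # False
```

The same table-driven approach extends naturally to the workflow-level offerings introduced later, by making a persona-centric offering the prerequisite of its workflow extensions.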
As customers begin to use the Analytics for Gas Management offering, they start to ask Sensorytics for increased functionality so that the product matches the users’ workflow. Building on the persona-based model described above for gas utilities, the service department decides that it desires richer functions and a wider footprint. This leads to additional products that are available to the service department; the three additional products, which match the workflow of that department, bring the total number of offerings to nine.
- Gas Analytics for Services Dispatching. A reporting engine used by the services department to dispatch service personnel to locations where the analytics platform identifies problem areas.
- Gas Analytics for Services Timecards. An extension to the basic analytics platform that ingests service personnel timecards to aid in services billing.
- Gas Analytics for Services Optimization. Aggregates information gathered from the utility endpoints and combines it with service personnel location and expertise to design an optimum personnel utilization plan that maximizes coverage and minimizes services costs.
As you can see, there are a variety of different offering structures that can be utilized to create multiple revenue streams from a single product or platform to match different market, persona and workflow requirements. This can be further extended to create different bundles of individual offerings or to create offerings as a combination of product function and product metrics.
One word of caution: keep models relatively simple. Avoid SKU explosion, where 20% of the SKUs generate 80% of the revenue and the remaining SKUs add complexity without meaningful return. Balancing the simplicity of the offering with revenue maximization is part of the art of product packaging.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
In the summer of 2019, several cloud service providers experienced nagging bouts of unplanned downtime, impacting thousands of businesses. Google had an outage in June, which brought down several of its most popular services including Search, Nest, YouTube and Gmail, and was hit by another major outage in early July. Apple also experienced a widespread cloud outage in July, which affected the App Store, Apple Music and Apple TV. Cloudflare, Facebook and Twitter also had problems.
The macro conditions underlying these outages can often be boiled down to increased internet complexity or rushed-to-market software releases. While IoT was not involved in these recent problems, the implications of such unpredictability are significant for any cloud-reliant IoT project, especially those with lives and safety depending on them.
This is because IoT and cloud computing are increasingly intertwined and symbiotic technologies. IoT devices generate huge amounts of data, with the cloud often serving as the central data collection and analysis repository. For example, consider a large multinational enterprise with IoT-connected thermometers across hundreds of factories, each one constantly generating data for analysis. These thermometers might be connected to other IoT devices and services, such as a factory manager’s remote, smartphone-based thermostat app. All of this requires superior speed and availability to work, making the recent spate of outages a major cause for concern.
Industrial IoT applications like this are just the tip of the iceberg. It’s one thing for factory thermometers to go down due to the cloud, but what happens when IoT is managing something even more critical, such as hospital systems and equipment?
The recent outages shouldn’t dissuade IoT projects from leveraging the cloud because, in many cases, the cloud offers higher levels of security, reliability and delivery speed than organizations can deliver themselves. But it does mean these organizations must be discerning and proactive about protecting themselves, especially if their IoT applications are mission critical.
Monitor the cloud yourself
Assurances from a cloud provider regarding availability and speed (the round-trip time of packets traveling to and from your connected IoT devices) can give some peace of mind and a sense of the provider’s overall infrastructure health. However, this should be considered supplemental information only; it cannot be relied upon exclusively to ensure that IoT device connections are reliable and fast.
This direct-to-the-cloud-and-back monitoring is not necessarily indicative of reality. Cloud service providers have partnerships with internet service providers (ISPs) and better network intelligence on how to route traffic. This means that, whenever possible, cloud monitoring will bypass the broader internet infrastructure that IoT device data must traverse, keeping packets in transit on their own networks and optimizing speed from point A to point B, and vice versa. This can result in a skewed, overly positive sense of IoT device connectivity and communication speed, because in the real world, networks and other external elements can get in the way.
Don’t track the cloud from only the cloud
While it’s critical for organizations with cloud-dependent IoT projects to do their own monitoring, they should never monitor the cloud from cloud-based infrastructure only. For the reasons outlined above, you might get a warped view of actual performance. Never monitor the cloud from the same cloud provider that’s handling your IoT project. If this cloud goes down, you’ll be blind to how your co-located IoT system is doing. You must make certain your monitoring vantage points are a mix of backbone, ISP, wireless and other node types.
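One way to reason about a mixed vantage-point setup is to aggregate round-trip samples per vantage and compare each against a latency objective. The sketch below is a minimal illustration; the vantage names, sample values and the 250 ms threshold are all hypothetical.

```python
# A minimal sketch of evaluating probe results from a mix of vantage
# points (backbone, ISP, wireless). All names and figures are
# hypothetical; a real system would collect samples from live probes.

def assess(probes: dict[str, list[float]], slo_ms: float = 250.0) -> dict[str, str]:
    """Given round-trip samples (ms) per vantage point, report which
    vantage points see the service within the latency objective,
    using an approximate 95th-percentile sample."""
    report = {}
    for vantage, samples in probes.items():
        p95 = sorted(samples)[int(0.95 * (len(samples) - 1))]
        report[vantage] = "ok" if p95 <= slo_ms else "degraded"
    return report

probes = {
    "backbone-nyc": [40, 42, 45, 41, 43],
    "isp-home-berlin": [120, 480, 510, 95, 530],  # real-world path struggles
    "wireless-4g-tokyo": [90, 110, 105, 98, 100],
}
print(assess(probes))
```

The point of the per-vantage breakdown is exactly the one made above: a provider-internal probe would resemble the backbone row and hide the degraded ISP path your devices actually use.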
Monitor IoT device availability
In an IoT world, devices essentially are the end users, so it is important to consistently monitor them and ensure they are reliable and interoperating with other IoT devices with exceptional speed. Since cloud service providers’ infrastructure consists of datacenters and other servers spread across the globe, a problem can occur anywhere and impact isolated segments of IoT devices.
That’s why it is critical to have as many eyes as possible, in all the key geographies where you have IoT devices running, as well as from the various network vantage points through which your IoT devices connect to the internet. This will put you in the best possible position to proactively detect IoT outages or slowdowns. Combining this with deep analytics will give you a head start in addressing the problem, whether it’s related to the cloud or not.
Have redundancy plans in place
If your IoT project supports a mission-critical process, you should consider having a multi-cloud strategy as a form of backup and protection. This might require a good amount of work, but it’s often worth it. You’ll need to make sure all the key phases of an IoT project, namely real-time data processing and storage, can be quickly ported over to another cloud in the event of primary cloud failure. This means testing failover strategies in advance to ensure cloud-to-cloud interactions are fast and reliable enough to support real-time data replication.
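The failover behavior described above can be sketched as a simple routing decision. The endpoint names and the health-check stub below are hypothetical stand-ins; a real implementation would probe each provider and replicate data continuously.

```python
# A minimal sketch of primary/secondary cloud selection based on
# health checks. Endpoint names and the is_healthy stub are
# hypothetical; a real probe would be an HTTP check or heartbeat.

def is_healthy(endpoint: str, status: dict[str, bool]) -> bool:
    """Stand-in for a real health probe against a cloud endpoint."""
    return status.get(endpoint, False)

def pick_ingest_target(status: dict[str, bool],
                       primary: str = "primary-cloud",
                       secondary: str = "secondary-cloud") -> str:
    """Route IoT data to the secondary cloud when the primary fails,
    the behavior that must be rehearsed and tested in advance."""
    if is_healthy(primary, status):
        return primary
    if is_healthy(secondary, status):
        return secondary
    raise RuntimeError("no healthy ingest target; buffer locally")

print(pick_ingest_target({"primary-cloud": False, "secondary-cloud": True}))
```

The hard part in practice is not the routing decision but what the text emphasizes: verifying in advance that cloud-to-cloud replication is fast enough that the secondary is actually usable when the switch happens.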
The cloud has many attributes that make it ideal for supporting IoT projects. Not surprisingly, growth in IoT data has led to cloud service provider growth and expansion, which supports more IoT data and projects. Together, the cloud and IoT represent a set of inextricably linked technologies of the future.
But organizations running IoT in the cloud must proceed with caution. If we’ve learned anything, it’s that even the strongest businesses in the cloud industry can — and inevitably will — go down. It’s up to you to take the steps needed to better prevent your IoT project from going down with them. In fact, this is something you can’t afford not to do.
The past couple of years have seen computing pushed out to the edge of the network at an ever-faster pace. The details vary as there are many different “edges” depending upon what problem is being addressed. But the overall trend is clear. By 2023, over 50% of new enterprise IT infrastructure deployed will be at the edge rather than in corporate datacenters, up from less than 10% today, according to IDC.
High-volume data streaming in from IoT devices, data that often must be quickly processed, filtered and acted upon, is one driver of edge computing. But there is an increasing number of other application areas, such as telco network functions, that are best optimized by placing service provisioning closer to users and devices.
None of this is a repudiation of cloud computing, but it does illustrate how assumptions that computing was on a path to wholesale centralization were simplistic at best. In practice, enterprise computing is highly heterogeneous, and organizations are mostly pursuing hybrid cloud approaches.
Although pushing compute out to where data, users and devices live has advantages, it also introduces challenges relative to centralized computing. These fall into three general categories: architecture and technology, ongoing operations and security.
Architecture and technology
The scale of some edge deployments is such that they can use software similar to what runs in datacenters. For example, OpenStack is popular among telcos for creating private clouds at the edge, just as it is for creating private clouds in a more traditional on-premises environment.
However, even if some of the software stack is common, edge installations must take several unique considerations into account. For example, you can’t just flip a switch and add more servers from a central pool if more capacity is needed at the edge.
It’s important to plan for the needed compute, as well as storage, networking and any other hardware, up front. Upgrading hundreds or even thousands of edge sites is an expensive undertaking. At the same time, the cost of over-provisioning those hundreds or thousands of sites adds up quickly too. The lesson here is that you must design deliberately.
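The tradeoff can be made concrete with rough, back-of-the-envelope arithmetic. Every figure below is invented for illustration; the point is the structure of the comparison, not the numbers.

```python
# Hypothetical arithmetic for the edge provisioning tradeoff.
# All costs are invented for illustration only.

sites = 1_000
server_cost = 4_000   # per extra server, purchased up front
truck_roll = 1_500    # per-site field visit for a later upgrade

# Option A: over-provision every site now (hardware only).
cost_over_provision = sites * server_cost
# Option B: upgrade every site later (hardware plus field visits).
cost_upgrade_all = sites * (server_cost + truck_roll)

# Break-even: the fraction of sites that must eventually need the
# extra server for up-front over-provisioning to be the cheaper bet.
break_even_fraction = server_cost / (server_cost + truck_roll)

print(f"over-provision every site now: ${cost_over_provision:,}")
print(f"upgrade every site later:      ${cost_upgrade_all:,}")
print(f"break-even fraction:           {break_even_fraction:.0%}")
```

Under these made-up numbers, over-provisioning only pays off if roughly 73% or more of sites would eventually need the extra capacity, which is why the sizing decision has to be made deliberately and fleet-wide, up front.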
As noted earlier, the edge can look very different depending upon the application. An appropriate architecture for hundreds of clusters with tens of servers each differs significantly from one for thousands of smaller endpoints, much less one made up of millions of individual edge computing devices.
Operations within large distributed systems
There are also practical issues related to operating a large distributed system. All those edge clusters might be installed in locations that don’t have an IT staff and might even be in places with no permanent human presence at all.
We need to account for the fact that this is a distributed system connected by potentially unreliable and throughput-constrained networks. How do we want an edge cluster to behave if it loses its connection to the datacenter? If disconnected operation makes sense, the system needs to be designed with that in mind.
We also need to deal with failures within the edge cluster itself. Failures are a normal expected event at scale. We must provide redundancy while also considering cost tradeoffs. Is it cheaper to install some extra hardware so that repairs can be made mostly on a slower-paced scheduled basis? Or are we better running leaner and treating failures as an urgent event?
Site management operations, such as deployments and upgrades, must be handled remotely and be fast, reliable and automated. Good monitoring and logging are required for centralized management to work at all. Effective analytics can also help to predict failures and thereby head off some problems before they occur.
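One common pattern for making remote upgrades both automated and safe is a staged, health-gated rollout. The sketch below is a minimal illustration; the site names, wave size and the deployment stub are hypothetical.

```python
# A minimal sketch of a staged rollout across edge sites with a
# health gate between waves. Site names, wave size and the deploy
# stub are hypothetical; a real system would drive remote tooling
# and consult centralized monitoring between waves.

def deploy(site: str, version: str, failing: set[str]) -> bool:
    """Stand-in for a real remote deployment; returns post-deploy health."""
    return site not in failing

def staged_rollout(sites: list[str], version: str,
                   wave_size: int = 2, failing=frozenset()) -> list[str]:
    """Deploy in waves and halt as soon as a wave reports a failure,
    so a bad release never reaches the whole fleet."""
    done = []
    for i in range(0, len(sites), wave_size):
        wave = sites[i:i + wave_size]
        results = {s: deploy(s, version, failing) for s in wave}
        done.extend(s for s, ok in results.items() if ok)
        if not all(results.values()):
            break  # centralized monitoring flags the failure; stop here
    return done

sites = ["site-a", "site-b", "site-c", "site-d", "site-e"]
print(staged_rollout(sites, "v1.2.0", failing={"site-c"}))
```

Here the rollout stops after the second wave because site-c fails its health check, leaving the last site on the old version until the problem is diagnosed. This is exactly where the monitoring and logging mentioned above earn their keep.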
Edge computing security
In some ways, security is a subset of operations in the context of edge computing, but it’s important enough that it’s worth calling out separately. I’ve written previously about IoT device security specifically, but edge computing as a whole also has some specific security challenges.
The scale of many edge computing installations means that the automation mentioned above must apply to security as well. Automating patching and security scanning is a good practice. But using automated tooling that enforces security policies and minimizes potential vulnerabilities at distributed sites is essential.
The edge has other unique considerations. In general, datacenters have established robust physical security practices around controlling access to the hardware, properly disposing of assets, such as disk drives that may contain sensitive information, and generally providing a highly engineered and controlled environment.
This is often not the case with edge clusters. Branch office and other remote systems have had to take these factors into account for a long time. However, edge locations might not even have the level of controls that a bank branch or satellite company office does. And the scale can be much greater.
Plan and plan again
One could argue that there are relatively few challenges that we see in edge computing that we don’t also see — to greater or lesser degrees — elsewhere. But that’s the rub: We see requirements for failure resiliency and automation everywhere. But dealing with them at the edge, where both distribution and scale are so great, can be especially challenging. For example, once a fix is identified, rolling it out to every edge cluster is probably a much more significant task with more failure modes than in the case of centralized infrastructure.
The above example further highlights that edge architectures need to be carefully planned. This includes up-front design work that considers the practical realities of a highly distributed system that exists largely outside of controlled datacenter environments. But it also includes deliberate planning for the ongoing operations of the entire system, including provisioning, failure recovery, upgrades and security.
When it comes to connectivity, we’ve been conditioned to think mobile-first, where a service runs on a given data package or Wi-Fi connection. But we need to change this mindset. By 2023, the worldwide number of IoT-connected devices is predicted to increase to 43 billion, three times more than in 2018 and well beyond mobile’s growth.
With this dramatic increase in IoT, there is an exponential hidden change in everything that relates to it, and we need to start thinking beyond just mobile. The question is – are we ready to support this?
Going back to the drawing board
The average person will have more than 13 network-connected devices by 2022, which is why things will have to change. These different IoT devices belonging to one user or household will come with different types of plans, connectivity options, monetization models, operating systems and onboarding experiences. A dumb device approach — where something connects with no real care for a consistent practice — will cause a wide range of problems.
When IoT has its breakthrough, connectivity providers need to be ready with a place in the value chain beyond just connectivity and back-end operations. This means handling onboarding, security and quality assurance, and tailoring IoT services to different verticals and target audiences. When we turn IoT sensor signals in a physical environment into insights, we achieve the next level in the connectivity revolution. We effectively digitize the physical world. This is where opportunity awaits, and data intelligence will be critical for its success.
What are connectivity players overlooking at present?
The IoT revolution will not only be about big consumer electronics manufacturers but smaller, niche players in the ecosystem. Connectivity providers can’t be in a position where they are only working with a handful of device manufacturers and cutting themselves out of revenue opportunities. They need to be a critical enabler, not a holdup.
So, what’s the best approach for connectivity players when it comes to devices? Ensuring they define the customer experience during the onboarding process and the ongoing service itself. This is especially important as device manufacturers have more influence than ever over how profiles are provisioned when it comes to IoT.
The most important takeaway here is this: the entire history of our industry has been viewed through a mobile device keyhole, where a service is provided with one price plan or data package. This is about to change dramatically. The connectivity providers who start planning now for the next era in our industry will have a competitive edge.
While many business verticals are looking to 5G technology to boost bottom lines and improve customer interactions, perhaps none is doing so more than the manufacturing sector — notably IoT — driven largely by the promise of the digital industrial revolution known as Industry 4.0. As such, industrial enterprises are increasingly considering deployment of private 5G networks.
A key feature of private 5G networks is the release of unlicensed spectrum, which enables companies to operate a private network without going through a mobile operator. This flexibility is driving mobile carriers to develop unique strategies for attracting industrial enterprise clients. Some operators are leasing their own spectrum to support private enterprise networks, while others are developing private wireless networks that are then sold to enterprise customers.
Regardless of how they’re built, the fact is that private networks are growing in popularity. Last year, private LTE and 5G networks accounted for some $2.5 billion in spending. With a projected CAGR of about 30%, the market is expected to surpass $5 billion by the end of 2021, according to market research firm ReportsnReports.
A number of factors are fueling this growth, including digital transformation initiatives and rising demand for highly reliable, secure wireless communications. Moreover, IoT, which underpins Industry 4.0, is driving new connectivity requirements for productivity, efficiency and quality of service (QoS).
5G’s technological advances are also propelling the growth of private networks. 5G provides enhanced broadband and throughput, ultra-reliable low latency and massive capacity for IoT. It also supports network slicing capabilities and low-power wide-area networks to support a full and cost-effective ecosystem necessary for effective industrial private networks.
Additionally, 5G enables easy upgrades, access and control via SIM card based credentials, and it enables simple interconnectivity between different technologies. 5G new radio will deliver guaranteed real-time response that’s critical for things like closed-loop motion control operations and remote robotics management. What’s more, 5G enables immense densification and IoT connectivity support, with guaranteed QoS for industries that have hundreds of thousands of sensors in relatively small areas.
Why go private?
Industrial enterprises face a number of challenges that private 5G networks could help address. Many rely on wired connectivity in their production facilities. However, such networks suffer from poor flexibility, scale and remote connectivity. In addition, current manufacturing networks depend on Wi-Fi, which isn’t reliable for mission-critical, always-on connectivity.
Equally important is the fact that carrier roadmaps and network timelines are not aligned to the needs of Industry 4.0. Instead, communications service providers are mainly focused on consumer services and have limited knowledge of the connectivity needs of advanced industrial networks. Consideration must also be given to the complexity of networks and the associated costs to companies that want to deploy them.
In contrast to wired networks, private 5G networks are managed locally and have dedicated equipment to provide local coverage that’s optimized for local services. The networks are optimized and tailored for industrial applications, especially those with stringent QoS and reliability demands.
Additionally, private 5G networks are dedicated and independent, ensuring data privacy and improved security. Such networks are driven by CIOs who control the technologies and digital transformation roadmap. Overall, private 5G networks provide control and flexibility by leveraging network slicing, vast bandwidth, light costs and low latency.
Last but not least, private 5G networks ensure a rapid return on investment — generally less than three years — and improved time to market for new products. Open architecture and cloud-based deployment serve to future-proof the enterprise platform, while also promoting product revenue growth.
How can organizations use private 5G networks?
Although private 5G networks are still in their infancy, a number of key use cases have emerged for manufacturing, including automated operations, track and trace, remote robotic manufacturing, predictive maintenance, remote product support and the integration of supply chains and inventory management.
Industrial facilities that are particularly well suited to take advantage of private 5G networks include:
- Shipping ports
- Transportation hubs
- Distribution warehouses
- Upstream and downstream oil and gas operations, as well as oil and gas transport
- Surface and underground mining operations
- Process manufacturing, hybrid manufacturing and discrete manufacturing plants
- Hospitals and labs
- Power plants
- Water treatment plants
A number of live trials of private 5G networks are already underway. In one trial, KPN and Shell have partnered to create a 5G network at the Port of Rotterdam for the preventative maintenance of almost 10,000 miles of pipelines. By combining ultra-high-definition cameras, the 5G network and machine-learning algorithms, maintenance of the pipelines is better predicted, and engineers receive needed information about the system on tablets that support augmented reality.
Siemens and Qualcomm recently announced that they have implemented what they claim is the first private 5G standalone network in a real industrial environment. BASF, a producer of chemicals, is also developing its own ultra-fast 5G network at its primary plant in Ludwigshafen, Germany. Volkswagen and Daimler also plan to independently create private 5G networks.
Challenges to overcome
Deploying a private 5G network isn’t without its challenges, regardless of who owns it. Companies must consider the right business model for them. They can rely on a systems integrator to design and deploy the network, or partner with a carrier and outsource the solution to it. Either way, the business case, network design, integration, deployment and operations must be defined in detail. The solution needs to reflect the enterprise’s IoT strategy, meet its objectives and leverage existing technologies. This will lead to improved service and lower ownership costs in the future.
There is no doubt that the next couple of years will be exciting ones for industrial enterprises looking to 5G to improve processes, increase productivity and enhance customer interactions. Private 5G networks will play an increasing role in making the promise of the Fourth Industrial Revolution a reality.
Electronic systems and equipment OEMs have historically pushed for greater functional integration. This means putting more electronic power on increasingly smaller chips, as well as making sub-assemblies even smaller. All this goes under the banner of increasing performance and decreasing cost. This is truer today for IoT devices than ever before.
Wire bonding plays a big part in these evolving technology trends
Wire bonding, which is the process of connecting a chip to its associated sub-assembly or printed circuit board (PCB), represents a key portion of some of IoT devices’ overall electronics operation. Traditionally, a single wire bonding operation has been used to connect a chip or die to the PCB or substrate.
However, we’re now seeing customers earnestly investigating considerably greater functionality for their next generation IoT products. That means deploying multi-tier wire bonding applications that utilize the same substrate and die real estate.
As the name implies, multi-tier wire bonding ranges from two to four or more sets of wire bonds connecting a highly complex bare die or chip to the PCB or substrate. Single tier wire bonding has reached a high level of efficiency and reliability on the PCB assembly and manufacturing floor.
But there are challenges when it comes to multi-tier wire bonding. For the IoT device OEM taking the multi-tier route, it’s best to rely on EMS providers that have solid footing in this emerging technology. The OEM must also know that multi-tier wire bonding offers a solution when the number of I/Os is far beyond what the traditional single wire bonding application supports.
In multi-tier wire bonding, the different rows of wires are isolated by maintaining a different loop height for each row. This creates a vertical gap between the rows of wire from the first row to the second, third and fourth rows.
Multi-tier wire bonding increases the capacity and the capability of a bare die. If wire bonding is double stacked, I/O capacity is doubled by adding the second set of wire bond pads. If you add a third tier of wire bonding, you increase I/O capacity threefold. Meanwhile, the bare die remains the same. Essentially, you are extracting more functionality out of the same die.
Multi-tier wire bonding challenges
However, there are a few challenges with multi-tier wire bonding. With multi-tier wire bonding, wire bonder precision is critical. You must ensure that the first row of wire bonding is the lowest in height; the second higher than the first; the third higher than the second; and the fourth must be the highest.
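This strict ordering rule lends itself to a simple validation check. The sketch below is illustrative only; the tier heights are hypothetical measurements in micrometers, not figures from any real process.

```python
# A minimal sketch of the loop-height rule for multi-tier wire
# bonding: each tier's loop height must be strictly greater than
# the tier below it, or adjacent rows risk contact and shorts.
# The measurements are hypothetical, in micrometers.

def heights_are_valid(loop_heights_um: list[float]) -> bool:
    """True when tiers ascend strictly from the first (lowest) row up."""
    return all(lower < upper
               for lower, upper in zip(loop_heights_um, loop_heights_um[1:]))

print(heights_are_valid([150, 250, 350, 450]))  # properly tiered: True
print(heights_are_valid([150, 250, 240, 450]))  # sagging third row: False
```

A sagging tier, like the 240 µm third row in the second example, is exactly the short-circuit scenario described in the next paragraphs.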
Third- and fourth-tier wire bonding demands a well-trained operator, top-notch precision and process control, exact calculations, and computational knowledge of wire size and bonder restrictions — including x, y and z directional restrictions — among other key requirements. Wire looping must also be correctly calculated. If it’s not, the wiring is prone to sag, creating shorts with other rows of wires.
For example, if a third tier of wire bonding is incorrectly performed and sags, it creates the possibility of a short with the second tier of wire bonding. Reworking this problem is difficult because wire bonding can only be redone two or three times; after that, the pad is worn down.
Another major challenge involved with multi-tier wire bonding is wire pull testing (WPT). This PCB microelectronics assembly step focuses on wire bond strength and quality. It involves applying upward force under the wire to be tested, and WPT is applied on every tier. If two to three tiers are not tested properly at the right time, it becomes a challenge to test without damaging certain wire tiers. Therefore, savvy technicians must know the key requirements to assure effective WPT. Each tier of wires must be pull tested before the next tier is bonded.
The future of multi-tier wire bonding
Multi-tier wire bonding is one PCB microelectronics technology that will gain greater interest for newer, smaller OEM products across multiple industries. For example, greater functionality at lower cost is much sought after in medical electronics, IoT devices, wearables and other portable gear.
The savvy IoT device OEM and its product designers will take into account several considerations when seeking guidance for multi-tier wire bonding, which includes optimal design and size of the pad. Pad pitch becomes important, as does the loop height and length of the wires. In addition, the staggering pitch is important, which is basically the pitch from one row to the second to the third, and so on.
If you’ve seen the film Casablanca, you’ll remember one of the final scenes where Captain Louis Renault turns to Rick Blaine and says, “round up the usual suspects.” As we head into 2020, the mystery of which industries will show the greatest penetration of IoT usage and value begins to unfold. However, this question has popped up annually for the last decade, and with several analyst firms and magazines publishing overlapping lists, it’s easy to round up the usual suspects: manufacturing, healthcare, and transportation and logistics. Or is it smart cities, retail, and media and entertainment? How do you actually decide?
Defining the usual suspects
Let’s examine what rounding up the usual suspects means. To determine these top industries, we want to look at the characteristics or traits that warrant something being a great application for IoT and the need for data at the edge. Based on my previous experience as an embedded systems engineer, I believe the answer boils down to business and technical virtual teams asking themselves the following questions and generating clear, positive answers to them:
- Will the investment in IoT quantitatively reduce my costs or enable me to generate more revenue?
- Can I model out my edge in a way that enables me to identify how applications of IoT with associated intelligence at this edge will generate positive outcomes for the answer to question one?
- Can I apply the appropriate process orchestration, rules and exception handling, and supporting data processing and analytics to implement or improve automation, machine-to-machine interface and decision support in real-time to what I’ve modeled in the answer to question two?
- Can I implement the infrastructure necessary, such as 5G networks, edge data management, bolt-on intelligence to brown-field environments and implementation of select machine learning inference models out at the edge to support a portfolio of projects addressing the answer to question three?
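The four questions above can be operationalized as a simple gating checklist: a project qualifies only if every question has a clear, positive answer. A minimal sketch, with illustrative question keys:

```python
# The four gating questions from the text, keyed for scoring.
QUESTIONS = [
    "quantified_cost_or_revenue_impact",   # Q1: does IoT reduce cost or grow revenue?
    "edge_model_identifies_outcomes",      # Q2: can the edge be modeled for outcomes?
    "orchestration_and_analytics_ready",   # Q3: can rules/analytics run in real time?
    "infrastructure_feasible",             # Q4: can supporting infrastructure be built?
]

def qualifies(answers):
    """A project qualifies only if every question has a clear 'yes'."""
    return all(answers.get(q, False) for q in QUESTIONS)

candidate = {
    "quantified_cost_or_revenue_impact": True,
    "edge_model_identifies_outcomes": True,
    "orchestration_and_analytics_ready": True,
    "infrastructure_feasible": False,  # e.g., no 5G or edge data management yet
}
print(qualifies(candidate))  # False: question four lacks a positive answer
```

The all-or-nothing gate reflects the text's point that each answer builds on the previous one; a single weak link undermines the whole case.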
Identifying the usual suspects
If you can create positive workable answers to all four of these questions, you are ready to line up your suspects for positive ID by the eyewitness. In other words, look for the right IoT projects in the right industries. I would argue that you should look for the following:
- Real-time edge decisions and actions. These decisions can make or break your business. For example, prescription drug packaging, product defects on an assembly line, package routing on a conveyor belt, and many others.
- Remote field environments. This is where bandwidth for connection to the cloud or other centralized IT resources is at a premium, or where connections core to your business experience periodic disconnection, either unexpectedly or by design, such as airline in-flight services, transoceanic cargo shipping and certain defense and intelligence cases.
- Local data processing. Additional processing of IoT devices and the data they generate by resident software containing artificial intelligence, which can then be applied in an evolutionary, iterative way across multiple projects. For example, condition monitoring leading to preventive maintenance, then evolving into predictive maintenance that allows the same data and underlying infrastructure to support digital twinning.
With this methodology, you can make your own predictions as to which industries have the greatest application of IoT devices and data at the edge. For me, the manufacturing industry tops the list for 2020.
However, it’s not all sub-verticals in manufacturing. In fact, I would narrow it to heavy industry and high-tech manufacturing. To get even more granular, I’d go with manufacturing that has an Industry 4.0 roadmap toward digital twinning. Similarly, transportation and logistics would be on my list, with a focus on streamlining and accelerating accurate package delivery. With these characteristics in mind, you can revisit the usual suspects in a more granular and measured fashion.
The fifth edition of IoT Solutions World Congress took place in Barcelona a few weeks ago. During it, two IoT industry heavyweights made some noteworthy comments on the state of the IoT market.
There are only three IoT use cases, according to Vodafone’s head of IoT development, Phil Skipper. One of these is asset tracking. A second is remote monitoring. The third, which depends on 5G capabilities in Release 16 of the 3GPP standard, is ultra-reliable industrial control.
The Industrial Internet Consortium’s (IIC) CTO, Stephen Mellor, spoke about technology hype and the value of sharing end-user stories. Ideally, these should go beyond ‘proof of concept’ profiles to show how technology works and proves its value.
Mellor was critical of the 10 years it took the telecoms industry to define a framework for hardware, transport and application layers. While concluding that the “high level industry stuff and low-level connectivity” are reasonably well defined, he pointed out that a lot of what takes place in the middle is neither well defined nor easy to understand.
Middleware and IoT platforms
Vodafone and the IIC make good points in relation to the present-day view of IoT. Much of the IoT industry’s thinking and received wisdom focuses on devices and connectivity. Asset tracking and remote monitoring fall into the class of relatively well-bounded silo applications.
Many of these are linked to closed-loop, business-process and industrial control applications. However, the concept of interoperability will re-shape these use-case ideas. As illustrated, an evolutionary path exists from current to future IoT solutions. This will involve higher degrees of collaboration across departmental and business boundaries. Connectivity will be commoditized, and innovators will create value by enhancing application logic through data sharing.
Beyond the silo applications phase, large organizations and small businesses operating in extended supply chains will benefit from interoperability. In such situations, economic and technology-management factors favor shared, horizontal IoT platforms. The next stage of evolution will involve interoperability across organizational boundaries, which corresponds to the world of federated IoT arrangements.
The platform capabilities, in between high-level industry knowhow and low-level connectivity, are a form of middleware. This layer in the ‘middle’ enables IoT applications — business logic — to interact with connected devices and sensors. Another way of looking at this middleware component is as a means of abstracting technical complexities in the IoT solution stack. Abstraction masks complexity and leaves application developers and device vendors to concentrate on what they do best. In other words, they no longer have to custom-build the full IoT stack for each and every use case.
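The abstraction argument can be illustrated with a thin middleware layer: application logic addresses devices through one generic interface and never touches vendor-specific protocol detail. A minimal sketch with hypothetical class and method names:

```python
# Hypothetical sketch of a middleware abstraction layer.
# Application logic talks to DeviceService; vendor drivers hide protocol detail.

class VendorADriver:
    """Stands in for one vendor's proprietary device protocol."""
    def read(self, device_id):
        # A real driver would speak the vendor's wire protocol here.
        return {"device": device_id, "temp_c": 21.5}  # canned reading

class DeviceService:
    """Middleware: routes generic reads to whichever driver owns the device."""
    def __init__(self):
        self._drivers = {}
    def register(self, device_id, driver):
        self._drivers[device_id] = driver
    def read_sensor(self, device_id):
        # Application code calls this one method regardless of vendor.
        return self._drivers[device_id].read(device_id)

service = DeviceService()
service.register("sensor-42", VendorADriver())
print(service.read_sensor("sensor-42")["temp_c"])  # 21.5
```

Swapping in a second vendor means writing another driver, not rewriting the application — which is the scale economy the middleware layer is meant to deliver.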
Data hub use case
The idea behind separating sources and consumers of data via a middle layer is evident in an emerging IoT use case known as the data hub. This is where several organizations share data sourced from their IoT devices towards a common purpose. One such example is the City of Dortmund. Its local energy utility, DEW21, is creating and comanaging a new data hub company that enables the combination, analysis and linkage of non-personal data to create smart city solutions in new application areas. Examples of this approach include intelligent parking management, supply line leakage detection and air quality measurement.
Another example is ConVEX, the newly launched UK initiative to develop a connected vehicle data exchange. This data hub aims to facilitate the commercial exchange of related data types, such as data from connected and autonomous vehicles as well as data from transport networks and about the environment. The data hub concept will streamline the process for private sector businesses to develop connected and autonomous mobility services. It will simultaneously help cities to achieve their goals of safer, cleaner and affordable transportation.
For the data hub concept to scale and support multiple use cases, there needs to be a set of common and interoperable capabilities. Take the example of device management, which appears in almost every IoT solution. A common approach, such as LWM2M, makes it straightforward for vendors to supply devices with a generally accepted remote management capability.
That makes life easier for systems integrators and for solution developers. It’s an approach that eventually leads to scale economies. To support IoT solutions more broadly, there are many other common services, such as middleware services that enable security, registration, subscription and discovery capabilities.
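A common device-management approach such as LWM2M standardizes a registration lifecycle: a client registers with the server, refreshes its registration before a lifetime expires, and de-registers on shutdown. The sketch below is a simplified model of that lifecycle, not a protocol implementation:

```python
import time

class RegistrationServer:
    """Simplified model of an LWM2M-style registration lifecycle:
    Register, Update (refresh before the lifetime expires), De-register."""
    def __init__(self):
        self._clients = {}  # endpoint name -> registration expiry timestamp

    def register(self, endpoint, lifetime_s):
        self._clients[endpoint] = time.monotonic() + lifetime_s

    def update(self, endpoint, lifetime_s):
        # A client must already be registered before it can refresh.
        if endpoint not in self._clients:
            raise KeyError(f"{endpoint} is not registered")
        self._clients[endpoint] = time.monotonic() + lifetime_s

    def deregister(self, endpoint):
        self._clients.pop(endpoint, None)

    def is_registered(self, endpoint):
        expiry = self._clients.get(endpoint)
        return expiry is not None and time.monotonic() < expiry
```

Because every vendor implements the same lifecycle, a systems integrator can manage a mixed fleet through one server rather than one management tool per vendor.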
As other organizations explore the middleware capabilities that enable more complex and cost efficient IoT solutions, horizontal-layer standardization will be of critical design importance. Will existing industry heavyweights succeed in creating large ecosystems through a handful of packaged solutions or de facto standardization? Or, will an open standards path as advocated by oneM2M and 3GPP prove to be more enduring by enabling a wider spectrum of IoT innovation?
The holiday season is a time of giving. Retailers are gearing up for a spike in online shopping, and charities are counting on end-of-year giving through their websites. At the same time, their employees are booking travel and buying the hottest toys and gadgets right from their desks.
However, this is also when cybercriminals actively look to exploit all of this increased online activity and generosity by identifying and picking off the most vulnerable and least prepared organizations. So, in addition to the spirit of good cheer that this time of year fosters, companies need to exercise a spirit of caution and proactive cybersecurity to keep their data safe.
Recent cyberattack tactics
An essential part of any security strategy is understanding your enemy’s latest tactics. For example, because most malware is delivered via email, many organizations have been aggressively addressing phishing attacks through end user training and upgrading their secure email gateway tools.
Of course, counterintelligence works in both directions. This may be why threat researchers are seeing cybercriminals expand their ability to deliver malware through other means, such as targeting publicly facing edge services, including web infrastructure and network communications protocols, or actively bypassing ad blocker tools, according to one recent report. Regardless of their motivation, organizations need to be aware that cybercriminals are actively leveraging attack vectors that don’t rely on traditional phishing tactics.
During the past quarter, cybercriminals actively exploited vulnerabilities on edge services that enable remote code execution. This lets criminals deliver malware while bypassing increased protections elsewhere, such as those preventing phishing exploits. Although targeting vulnerable edge devices is not new, shifting to systems where defenders may not be watching as closely can catch organizations off guard. The lesson is clear: just because you need to shore up your defenses in one area doesn’t mean you can let your guard down anywhere else.
Keeping your visibility tuned to all potential attack vectors is always challenging; it can be especially problematic during the busy online shopping season, when online services experience significantly increased activity.
Seven holiday security tips
Organizations can take seven steps to defend against holiday online threats; these measures remain valuable the rest of the year as well.
- Teach employees to recognize phishing. Even with all the training and education about phishing, it remains the number one vector for delivering malware. Employees must be trained never to open an email or click on a link sent from a stranger, and even emails from known persons must be subject to scrutiny. Some cybercriminals use a technique whereby they hijack an active email thread and insert a malicious email while masquerading as one of the thread participants. Success rates are very high, so users need to understand that if a message seems out of character in any way, they should check with the sender before proceeding.
- Use good cyberhygiene. Protect the organization from malware and viruses by installing well-known and well-reviewed antimalware software. Keep it updated and run it regularly. Ensure passwords are strong and changed often. And use security reports to compare current traffic against known threats.
- Regularly update devices. Cybercriminals go after low-hanging fruit wherever they can find it, which is why they use well-known vulnerabilities that have not been patched. In Q3 of 2019, vulnerabilities ten or more years old were targeted just as frequently as those uncovered in 2018 and 2019 — and the same was true for every year in between. That is why every organization needs to make it a policy to download and run all updates for devices — including IoT devices — and their apps as soon as they become available.
- Use a VPN. For companies with a BYOD policy, consider using a VPN service to protect transactions. Unencrypted data, even if it is just moving a few feet from a device to a local wireless router, can be easily intercepted or compromised.
- Only download legitimate apps. A compromised app can intercept an employee’s financial data or other personal or company information. Train employees to only download apps from legitimate application sites and never allow installations from unknown sources. Organizations may want to create a BYOD policy mandating the use of a security tool from a legitimate app store that scans devices for signs of compromise. Likewise, unknown and unvalidated SaaS applications can introduce real risk. Deploying a CASB solution allows you to establish and maintain control over unknown SaaS applications.
- Secure mobile devices. Malware on personal devices represents 14% of all malware organizations need to deal with, according to Fortinet. BYOD policies can complicate security strategies, so make sure appropriate controls are in place to protect mobile devices – particularly at their wireless access points. This requires wireless access points, mobile security services and MDM solutions to be fully integrated into next-generation firewalls.
- Implement segmentation. Segment IoT devices into secured network zones using customized policies. Segments can then be linked together across the network, with deep inspection, monitoring and protections being applied at critical junctions — especially at access points, cross-segment network traffic locations and even across multi-cloud environments.
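The segmentation tip above can be made concrete with a simple policy model: each device belongs to a zone, and cross-zone traffic passes only where a policy explicitly permits it. A minimal sketch with hypothetical device and zone names:

```python
# Hypothetical segmentation policy: devices map to zones, and only
# explicitly allowed zone-to-zone flows pass; everything else is denied.
DEVICE_ZONES = {
    "camera-01": "iot",
    "hvac-07": "iot",
    "app-server": "apps",
    "db-server": "data",
}

ALLOWED_FLOWS = {
    ("iot", "apps"),   # IoT devices may reach application servers
    ("apps", "data"),  # application servers may reach the data tier
}

def is_allowed(src_device, dst_device):
    """Default-deny: permit only flows listed in ALLOWED_FLOWS."""
    src = DEVICE_ZONES.get(src_device)
    dst = DEVICE_ZONES.get(dst_device)
    if src is None or dst is None:
        return False  # unknown devices are denied outright
    if src == dst:
        return True   # intra-zone traffic permitted (still monitored)
    return (src, dst) in ALLOWED_FLOWS

print(is_allowed("camera-01", "app-server"))  # True
print(is_allowed("camera-01", "db-server"))   # False: IoT cannot reach data tier
```

The default-deny stance means a compromised IoT device cannot reach the data tier directly; it must traverse a monitored junction, which is where deep inspection earns its keep.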
The holiday season is anticipated by holiday givers and cybercriminals alike. Bad actors will use all kinds of clever ploys to prey on the goodwill and generosity of this time of year. Organizations can counter these darker angels of human nature by shining the light of strong cybersecurity into every corner of the network.
Over the past few centuries, American manufacturing has experienced several sweeping changes, from the dawn of modern bulk material handling to Henry Ford’s game-changing assembly line, to today’s robotics and the rise of Industry 4.0. Yet we have only begun to scratch the surface of how a manufacturing ecosystem can benefit from incorporating the network of connected devices that create IoT.
The number of IoT devices is projected to amount to 75.44 billion worldwide by 2025, which is a fivefold increase. Many of these will be in the manufacturing industry, which is where the foundation for IIoT was laid more than 20 years ago. Many of today’s most advanced IT asset management initiatives still use the manufacturing execution system infrastructure that has helped business owners manage shop floor controls for decades.
Because these existing systems have made it easier for manufacturers to implement IoT compared with other industries, they’ve jumped headlong into IoT investments. U.S. manufacturing companies could account for as much as 15% of the country’s total IoT purchases through 2020, and those that have already invested have been reaping the business benefits of real-time data.
Using real-time data for manufacturing asset management
Beyond simple data collection improvements, IoT transforms collected data into usable, relevant and actionable insights that support more informed business decisions, making asset management programs one of the biggest opportunities for IIoT. IBM found that pairing IoT with asset management programs can help reduce costs in production, drive revenue growth and speed up time to market across three primary IIoT use cases:
- Operational tasks: IoT’s constant data feedback will create new levels of efficiency in manufacturing environments. By combining IT asset management (ITAM), performance monitoring and planning strategies, organizations can not only optimize existing opportunities, but also uncover brand-new ones that would have been impossible to identify otherwise.
- Production and maintenance tasks: When combined with AI and machine learning, IoT can monitor production rates, track inventory levels and even perform predictive maintenance functions. For example, a motor outfitted with smart sensors wirelessly sends a continual stream of real-time operational data to condition-based asset monitoring software. This helps plant managers see which machines need maintenance long before anything fails, which increases uptime.
- Field service tasks: IoT creates competitive advantages for product-oriented efforts that fall outside the traditional manufacturing environment. Remote employees can generate immediate and accurate customer pricing quotes based on global inventory levels and current raw material costs, in addition to uncovering real-time market trends and behaviors; thus accelerating go-to-market strategies.
Of course, to achieve this triple whammy of benefits, all these smart, connected devices must be supported by additional upgraded business intelligence and ITAM capabilities.
Solving the challenges of IoT in manufacturing asset management
Because implementing IoT will lead to an explosion of endpoints, manufacturers must look to sophisticated, centralized management strategies to manage all of them and ensure the safety of business-critical processes and data.
Any device with connectivity — by definition — is a mobile device and must be managed and treated as an endpoint, which ultimately changes the principles of manufacturing IT asset management. Both the quantity and the types of devices will add complexity to how an organization manages all the endpoints on its network, leading to the potential for security, spending and inventory gaps.
Additionally, this more complex digital ecosystem will require manufacturers to implement ways to maintain control and visibility over their inventory, as well as ensure all IoT devices remain up and running to prevent downtime. When looking for a platform that can manage IoT devices, as well as every other technology asset within a manufacturing environment, manufacturers should bear in mind these three tips:
- Simplify technology management across the entire manufacturing ecosystem by managing all assets, such as sensors, mobile devices, smart machines and cloud services, in one single platform.
- Incorporating IoT will often mean manufacturers can do more with fewer technologies and save on costs. With a way to see all technology expenses in one place, manufacturers can find ways to streamline assets, manage contracts and vendors, and eliminate redundancies.
- The beauty of IoT is the sheer amount of data a connected environment creates. A platform that can translate all this data into actionable reporting will help manufacturers identify trends, areas of overspend, performance gaps and opportunities for optimization, which ensures valuable insights aren’t lost within those mountains of data.
Managing IoT endpoints will be a critical part of the new manufacturing ecosystem, helping manufacturers reach new heights of efficiency, productivity, innovation and profitability.