It’s a regular weekly meeting of the supply chain team, and the contrast in data quality between manufacturing and everyone else is glaring: The production planning team shows detailed data on current inventory and cross-references demand-planning numbers to identify any risks to the company’s ability to meet sales forecasts. The logistics team references its own software to show the location and contents of every container, pallet and box of the company’s finished products. The head of manufacturing, meanwhile, rolls in a whiteboard, grabs a handful of markers and jots down production estimates for the next week from a yellow notepad.
That’s a snapshot of the ability of many corporations to manage their supply chains from raw material sourcing through delivery to the end customer. The planning process is awash in information covering all aspects of the business except for one critical area: manufacturing.
Without real-time manufacturing production and product quality information, supply chain risk management is too dependent on guesswork. Understanding yield and available production capacity in real time allows the manufacturer to move beyond forecasts and instead make fact-based decisions on lead times, capacity and strategic risk management. Unfortunately, for most supply chain executives, this data remains something of a black box.
Forecasted output rests on capacity numbers that most plants set based on past performance. Even though these may not be the most reliable, we at least know a capacity forecast number to plan around. But predicting how much finished product we’ll actually have at the end of the month involves a good deal of guesswork. Too often, we also have little visibility into the bottlenecks that are throttling output and struggle to quickly determine the cause of downtime.
The lack of clarity isn’t necessarily due to a lack of data. Numerous analyst reports have found that manufacturers produce more data than any other industry (see, for example, “Engineering the 21st Century Digital Factory”). The widespread incorporation of sensors onto manufacturing equipment has flooded many companies with machine data. The problem is that relatively few have been able to turn that data into real-time visibility of their production process.
Supply chain problems: Three scenarios
Consider these scenarios:
Scenario 1: Capacity planning, capacity utilization
The company’s latest gadget, the XPS, is a hit with reviewers and customers. The sales team is swinging for the fences, and online merchants and big-box chains have put in big orders. Unfortunately, the only insight the company has into its ability to deliver on this demand is a static plant capacity number stored in the production planning modules of its ERP systems.
The company knows from past experience that actual production can vary dramatically from expectations, but it still can’t get the right data from production to improve forecasts and doesn’t know how to analyze what data it does have. If it’s wrong, it won’t be able to replace lost sales on its previous gadget model, earnings will take a hit and things will get ugly at the quarterly board meeting.
The questions: How many units can we commit to producing next quarter? How much will we produce under current conditions? How much capacity do we have to increase production? What resources would it take to further increase production? What would be the cost of increasing production?
The answers to these questions — and the keys to unlocking hidden capacity — lie in a thorough analysis of machine data. A manufacturer can compare production across similar lines, identify those with lagging output, and find and fix the root cause. Analysis can find production bottlenecks and unused resources and identify the most cost-effective way to increase capacity (e.g., add a second shift vs. replace a costly piece of equipment that is creating a production bottleneck). With these answers in hand, manufacturing leaders have better decision-making ability.
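As a rough illustration of this kind of machine-data analysis, here is a minimal Python sketch. The line names, station names and per-shift numbers are entirely hypothetical; the point is that comparing station throughput across similar lines surfaces both the lagging line and the station that is throttling it:

```python
from statistics import mean

# Hypothetical machine data: units produced per shift, by line and station.
shift_output = {
    "line_1": {"molding": 520, "assembly": 480, "packaging": 510},
    "line_2": {"molding": 515, "assembly": 350, "packaging": 505},
    "line_3": {"molding": 525, "assembly": 475, "packaging": 500},
}

def line_throughput(stations):
    # A line can move no faster than its slowest station.
    return min(stations.values())

def find_bottlenecks(output):
    # For each line, report its throughput and the station limiting it.
    results = {}
    for line, stations in output.items():
        station, rate = min(stations.items(), key=lambda kv: kv[1])
        results[line] = {"throughput": rate, "bottleneck": station}
    return results

def lagging_lines(output, threshold=0.9):
    # Flag lines producing less than 90% of the fleet average.
    rates = {line: line_throughput(s) for line, s in output.items()}
    avg = mean(rates.values())
    return [line for line, r in rates.items() if r < threshold * avg]

print(find_bottlenecks(shift_output))  # line_2's assembly station limits it to 350
print(lagging_lines(shift_output))     # only line_2 falls below 90% of the average
```

In this toy example, fixing line_2’s assembly station (by adding a shift there or replacing the machine) is clearly cheaper than upgrading all three lines, which is exactly the kind of cost-effectiveness comparison described above.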
Scenario 2: Production variability
The company had projected that each of its five new XPS production lines would make 15,000 units a day, but they’ve been averaging only 12,000 units, with three of the lines unable to exceed 11,000 units.
The questions: Why are these three seemingly identical lines performing so far below expectations? How soon can we diagnose the problems and bring them up to their full targets? Can we squeeze even more out of our top two lines?
The fundamental question here is: What is causing the variation? — and the answer lies in the data. Is it because one line is experiencing more downtime? If so, which variables are associated with the higher downtime? Or is scrap rate higher? If so, a root cause analysis can identify where the defects are being introduced, and what variables correlate with higher defect rates.
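One simple way to hunt for those associations is to correlate candidate process variables with the scrap rate across production runs. The sketch below uses made-up sensor readings and field names; in this fabricated data, temperature tracks scrap closely while vibration does not:

```python
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    denom = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / denom

# Hypothetical per-shift readings from one underperforming line.
runs = [
    {"temp_c": 180, "vibration": 0.30, "scrap_rate": 0.010},
    {"temp_c": 182, "vibration": 0.28, "scrap_rate": 0.012},
    {"temp_c": 185, "vibration": 0.31, "scrap_rate": 0.018},
    {"temp_c": 188, "vibration": 0.29, "scrap_rate": 0.024},
    {"temp_c": 190, "vibration": 0.30, "scrap_rate": 0.028},
]

# Rank candidate process variables by how strongly they track scrap rate.
scrap = [r["scrap_rate"] for r in runs]
correlations = {
    var: pearson([r[var] for r in runs], scrap)
    for var in ("temp_c", "vibration")
}
# Here temp_c correlates strongly with scrap; vibration barely at all.
```

Correlation is only a pointer, not proof of causation, but it tells engineers which variable to investigate first in the root cause analysis.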
Scenario 3: Track and trace, root cause analysis
The company has had to recall 25,000 defective units. The company knows which part has been failing and knows they were all made at the same plant during a two-week timespan. What is causing the defect?
With the right data available, the manufacturer can drill down until the cause is found. A crucial part of the data modeling is being able to associate specific products or batches with the conditions of each machine at the time the batch passed through that machine or process. Did all the defective products pass through a specific line or machine? How were the settings and conditions on that machine different from the other machines not associated with the defect? Or with the conditions on the same machine before or after the two-week period when the defective products were made?
Having the right data model in place enables the manufacturer to not only diagnose the root cause of the problem, but also to narrow the scope of the recall to only products directly impacted by the processes in question.
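A minimal sketch of that data model, with invented batch and machine names, shows how batch-to-machine associations both isolate the suspect machine and narrow the recall:

```python
# Hypothetical traceability records: each batch is linked to the machines it
# passed through and whether it fell in the defective population.
batch_history = {
    "B100": {"machines": {"press_2", "oven_1"}, "defective": True},
    "B101": {"machines": {"press_1", "oven_1"}, "defective": False},
    "B102": {"machines": {"press_2", "oven_1"}, "defective": True},
    "B103": {"machines": {"press_1", "oven_2"}, "defective": False},
}

def suspect_machines(history):
    # Machines shared by every defective batch, minus any machine that
    # also produced good product.
    bad = [rec["machines"] for rec in history.values() if rec["defective"]]
    good = [rec["machines"] for rec in history.values() if not rec["defective"]]
    return set.intersection(*bad) - set().union(*good)

def recall_scope(history, suspects):
    # Only batches that touched a suspect machine need to be recalled.
    return {b for b, rec in history.items() if rec["machines"] & suspects}

suspects = suspect_machines(batch_history)          # == {"press_2"}
affected = recall_scope(batch_history, suspects)    # == {"B100", "B102"}
```

Even in this toy form, oven_1 is ruled out because it also handled good batches, so the recall can be limited to the batches that passed through press_2.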
While manufacturers can separately optimize demand planning, sourcing, manufacturing or distribution, they won’t achieve transformational benefits unless all parts of the organization can see, understand and use the data from all parts of the process. By taking an integrated view of customer demand, sourcing, production and distribution, manufacturers can achieve savings that really matter.
Having real-time visibility into production data is the first step toward creating a more agile supply chain organization. By better understanding production dependencies through accurate, real-time production data coming directly from the machines, supply chain teams can move beyond production planning based on static, inaccurate capacity numbers and reactive problem solving, and instead make fact-based decisions that transform the way their organizations operate.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
The rise of the internet of things is clear. According to a study by Red Hat, 21% of organizations have already incorporated IoT projects into their business, 28% plan to do so in the next year and 70% over the next five years.
Given that many companies want to incorporate IoT into their existing business or offerings, the search for in-house talent is becoming grim. Businesses, therefore, opt to outsource part or all of their IoT needs in order to attain the necessary development skills.
Looking at the IoT development stack, various parts of the process can be outsourced. Some companies choose to outsource certain elements of an IoT project because they have some developers in-house but lack specific skills related to the development. This route also lets companies scale up and down quickly.
Before deciding whether or not to employ an outsourcing provider, companies have to look at the needs of an IoT product.
IoT development requirements
The product begins with R&D. Companies must acquire the technical skills to build the entire product, employing qualified developers and testers. Creating an IoT product requires more than just a developer. Teams need specialists in areas such as microprocessor programming and chip design.
Someone needs to manage the database of information, as well as analytics. The bigger issue for businesses is finding someone who can not only manage the data, but also address security concerns. Data breaches are far too common, so the need for a security specialist is crucial. Someone on the team also needs to handle application development platforms and system integrations.
As there are so many requirements for developing an IoT-based system, companies often choose to outsource the entire product. Another popular option is to hire an extended team to fill in the technical skills missing internally.
Why outsource IoT
As with any product, there is risk when developing an IoT product. A company could scale up internally, build the product and something could go wrong. Maybe certain functionality does not work, or there’s no real market need. Outsourcing is a way to mitigate risk.
Keeping the cost advantage in mind, outsourcing allows companies to hire many more engineers and specialists for a fraction of the cost. But, more importantly in the competitive world of IoT, companies can get to market faster by outsourcing. If, per Red Hat, so many businesses want to add IoT functionality, there will be more competition in this market, so timeliness is key. Outsourcing also offers the advantage of more flexibility and more creativity with a skilled team working on the project.
Which parts of an IoT product to outsource
Many organizations choose to outsource IoT from beginning to end. They either do not have or do not want to find the talent to complete the project. Others decide to outsource bits and pieces and use an internal team occasionally. Here’s a look at what companies can outsource during IoT application development:
Research and development
Outsourcing R&D is an often overlooked benefit. However, having third-party research and validating an idea is an excellent way to ensure the product or service is fit for market placement. Internally, companies can become tied to their own ideas and find ways to approve them. Using an outsider, however, can eliminate bias while capitalizing on the expertise that may not live in-house. Companies may also choose to outsource R&D so that team members can focus on core business functions — they simply may not have the time to devote to this endeavor.
Development
There are IoT developers who can help create the actual product. These developers often have a combination of skills including AI, mobile, user experience/user interface (UX/UI), IT networking, hardware interfacing and more. Since IoT developers are hard to find (and expensive to hire full time), businesses outsource to organizations that have this talent on staff.
User experience and design
IoT applications are all about the user experience. UX drives sales, increases conversions, strengthens the business-consumer relationship and keeps the focus on your customer base. If the product is difficult to use, then it will be even harder to sell or maintain. IoT UX and UI is relatively new, but increasing in importance as consumers adopt IoT technology for everyday use.
Cloud services
An in-house cloud service is impractical. Since there are many renowned services available, companies often offload this part of the IoT project. The costs associated with maintaining and upgrading a cloud service are far too high to handle in-house, especially when taking security concerns into consideration.
Security and data
According to Gemalto’s “The State of IoT Security” report, only 33% of respondents believed they had complete control over the data that their IoT products collect. As security concerns tighten and data breach consequences become more severe, businesses can opt to outsource security entirely rather than finding someone in-house to handle it.
Final recommendations for IoT outsourcing
Whether outsourcing all or just a part of an IoT project, it’s important to remember that not all outsourcing firms are created equal. While outsourcing cuts costs, remember to go with a provider that adds value — and not just one that offers the lowest price. Most importantly, you should find someone with experience and numerous success stories capable of developing and maintaining an IoT application.
Open source already plays a big role in the home network. Most, if not all, home routers and networked devices, including IoT devices, in the home run some form of Linux. Many home routers are based on OpenWRT, an open source operating system based on Linux, but specifically designed for a router. And, Comcast has launched RDK-B as the open source operating system that powers all of its home routers. On the IoT front, Mozilla has open sourced an IoT gateway. So, why might the industry benefit from these projects? Here are a few reasons:
- The reliance on open source versus proprietary firmware allows companies to focus on new and interesting innovations without having to continually reinvent core functionality.
- Open source can lead to more router manufacturers’ opening their router firmware to running third-party applications, which is great for app developers and consumers.
- Open source can lead to more secure and manageable network infrastructure.
- Open source can lead to better interoperability between various devices.
With open source, developers can use API access and underlying code to create valuable applications that use the information and control available on the home router. Take the guest experience as an example: An application could link guest access to your Facebook friends, or limit guest access to when you are home, or limit guest access to certain devices on your network (e.g., your friend should be able to print to your printer but not see all your tax returns on your desktop computer.) Now, consider the video streaming experience: With full home network context, an application could detect when your streaming devices are buffering and adjust the network to make your video watching experience better.
Even though it is theoretically possible for all of these new and innovative applications to be delivered in a single monolithic firmware by the same company that manufactured your router, it is very unlikely. Very few companies build both hardware and software well. We know from the Android and iOS app stores that it is not possible for a single company to match the level of innovation fostered by an app ecosystem. The OpenWRT community is working on systems to allow third-party applications to securely and safely run on the router. By using this and fostering an app ecosystem, router manufacturers can drive preference for their hardware while delivering more value to their customers through third-party applications.
The home router is uniquely positioned to help consumers adopt and optimize their explosion of connected devices. Today, broadband homes have an average of nine devices on the Wi-Fi network. By 2020, this will be more than 20, including video and music streaming devices, personal assistants like Alexa and Google Home, security cameras, thermostats, smoke detectors, door locks, power plugs and adapters, lights and appliances. (Over 4,400 device makers exhibited at the CES show this year.) As the number of these devices connecting to the home Wi-Fi network increases, the complexity of managing and securing that network increases exponentially: A single device with a poor Wi-Fi signal could impact the performance of other devices on your network. Likewise, a single device downloading a software update from the internet could wreck your movie night or Monday night football. It’s time we grow an ecosystem of home router applications that talk to other elements in the network and make good decisions.
New data tells us that stakeholders in the internet of things are making progress when it comes to securing IoT infrastructure and the data it produces, but there is still a long way to go. In order to assess the current state of affairs, Gemalto recently surveyed 950 IT and business decision-makers with awareness of IoT in their organization. Some of the results were encouraging, while others were less so.
Security is clearly an emerging priority. Companies are now devoting 13% of their IoT budgets to security, up from 11% last year. Ninety percent said they believe security is a major consideration for their customers and 97% believe that a strong approach to IoT security is a key competitive differentiator.
Three primary security tools emerged as the chief methods IoT vendors are relying on. One is encryption, which remains an optimal way to protect any type of data in motion and at rest. The use of encryption rose from 67% last year to 71%, which is an impressive figure, especially when you compare it to the security landscape as a whole. As part of tracking publicly disclosed breaches, Gemalto has found that encryption was in place in only 4% of the 944 worldwide breach incidents that took place in the first half of 2018.
More organizations also started protecting their devices and other technologies with password management (up from 63% to 66%). Poor password management has already been a major storyline in the world of IoT security. CCTV cameras, DVR boxes and routers with default passwords notoriously led to the Mirai botnet and others that have followed. They operate by gaining access to devices using the default password and then infecting them with software to launch distributed denial-of-service (DDoS) attacks.
Less prominent, but still very promising, is the increased use of blockchain to protect IoT networks. Blockchain, in its simplest form, is an immutable distributed ledger that provides cryptographic assurance of its data and transactions. There is a natural fit for this technology with IoT — a distributed collection of devices that send and receive data. Blockchain can be the enabling platform used to establish trust and provide security and privacy at scale, resistant to single points of failure. In the Gemalto study, the number of respondents using it rose from 9% to 19%, an increase of 10 percentage points. Twenty-three percent said they would ideally use blockchain for authentication, while 91% of those who don’t use the technology would consider using it in the future.
Also somewhat of an interesting development is the number of people who believe governments should be doing more to regulate IoT security. Nearly all (96%) believe there should be laws in place, while 80% go so far as to call on governments around the world to provide more robust guidelines for the industry. Security ownership is a key theme here — 59% said they think regulations should clarify who is responsible for IoT security.
Not so encouraging
In spite of ramped-up security efforts, less than half of companies (48%) are able to detect if any of their IoT devices have been breached. This is an obvious concern, given that the rising number of connected devices represents a growing attack surface for hackers to exploit, not to mention the fact that an undetected data breach can cause serious damage.
Data privacy is also an issue. Even as the public has grown more aware of data privacy issues, 38% of respondents to the survey said privacy presents a challenge to their organization. Thirty-four percent said they experience challenges associated with collecting large amounts of data from connected devices. Only 59% said they encrypt all data in their organization.
Like most of us, respondents to the survey were also consumers of IoT. Sixty-two percent said they believe that the security of their IoT devices needs improving, while 54% said they have privacy concerns about their IoT use.
Where does that leave us?
Ultimately, it’s clear that there is a growing appreciation for the importance of security to IoT. In the past, we have advocated for a security-by-design approach — in other words, building security mechanisms into IoT technologies as a foundational piece of their development. According to this survey, the number of organizations taking such an approach rose from 50% to 57% this year. This is one of the most encouraging signs of all.
With that said, there is almost universal acknowledgment that government policy and mandates would accelerate the adoption of proper security practices in IoT. If the recent privacy and data scandals are any indication, the private sector may need a nudge to avert serious damage from a major privacy or DDoS disaster.
In the meantime, we still have work to do. The number of connected devices is on track to hit 20 billion by 2023. Anyone who is a part of this ecosystem, whether you’re manufacturing devices, writing software or crunching IoT data, needs to continue improving data protection, breach detection and mitigation capabilities.
A truck leaves a job site with a problem that will not be discovered until later in the day: a ladder was never returned to the truck where it belongs. It will take hours to discover it is missing and perhaps a day or two of searching before the decision is made to replace it.
Non-powered and medium-value assets can be as important in construction rentals as heavy equipment and cargo containers. Because they are smaller (and there are often more of them), they can be more difficult to track. Manual inventory tracking is an inefficient and time-consuming process: the recorded location is often only as accurate as the last person who touched a piece of equipment, and it does not update automatically as the asset moves.
For construction rental companies, tracking a large number of tools and equipment can make or break a business. Knowing where every drill, chainsaw or ladder is in real time is crucial to the ability to fulfill customer requests — or, as it’s known in the industry, “staying in business.” Asset tracking is the key to inventory and the prevention of loss, whether actual or perceived.
Perceived loss occurs when a piece of equipment is logged in inventory but no one can locate it. In some cases, perceived loss is worse than actual loss because of the time personnel spend determining whether the equipment has been lost, stolen or destroyed — or is only misplaced. Even if this represents a small amount of time per day, it wastes worker time that could be spent on billable projects, and this non-productive cost adds up over the course of a year. When you cannot locate a piece of equipment for a rental, you are giving your customer a reason to go elsewhere. Rushing to source replacement tools or equipment can be expensive and difficult due to limited availability — and there is always the possibility that you are buying unnecessary duplicates of assets that are merely misplaced. Losing track of these smaller pieces of equipment wastes both money and time.
Advances in IoT technology, however, mean there is now a solution for this all-too-common problem.
Smart sensor technologies, such as Bluetooth Low Energy tags, supply the information needed to solve the problem when equipment goes missing: each asset’s real-time location. The tags communicate with a cellular-enabled telematics gateway installed on a truck or other mobile asset at the yard or on a job site to provide a constant stream of location data. Bluetooth is economical because its minimal power requirements allow tag batteries to last for years.
With a sensor tag attached, the whereabouts of an asset is available from a web-based application on a phone, tablet or computer. The asset’s current location and location history can be integrated into back-office systems, making it easier to conduct audits or analyze operational trends that help predict demand. With specific usage reports, companies can understand the importance and demand patterns of each piece of equipment, making sure every asset is used to its maximum potential and helping clarify where to invest in additional inventory.
For example, during a major weather event, demand for certain types of equipment surges as people prepare and protect their property, and again during the rebuilding process. Weather or a natural disaster can bring a spike in demand for specific types of assets. Real-time inventory helps a rental company respond quickly to customers. Knowing exactly where each item is located is key to satisfying these periods of elevated demand and urgency. Software that reports inventory usage and movement in the aggregate can also help companies plan for next year’s weather events.
These tags and asset tracking software can also trigger alerts when equipment enters or leaves a specific area. By setting a virtual boundary, or geozone, on a map, a company can monitor the movement of assets in and out of their yard, a job site, a supplier or any other area where equipment travels and send alerts to the people who need to know about equipment movement. When a truck leaves a job site, for example, you will know whether the ladder is on it — and can take action to make sure it is returned on time. Geozones are especially helpful in preventing actual loss. When an alert comes in off-hours, the right people can know immediately that something is amiss. This helps prevent theft, unauthorized use or just a simple misplacement.
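Under the hood, a circular geozone check is little more than a distance test against each reported tag position. The sketch below, with an invented job-site location and ladder fix, shows the idea using the haversine great-circle distance:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS fixes.
    r = 6371000  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def inside_geozone(fix, zone):
    # A tag is "inside" if it is within the zone's radius of its center.
    return haversine_m(fix["lat"], fix["lon"], zone["lat"], zone["lon"]) <= zone["radius_m"]

# Hypothetical job-site geozone and a tag position relayed by the truck's gateway.
job_site = {"lat": 40.7128, "lon": -74.0060, "radius_m": 150}
ladder_fix = {"lat": 40.7300, "lon": -74.0060}  # roughly 1.9 km north of the site

if not inside_geozone(ladder_fix, job_site):
    print("ALERT: ladder has left the job-site geozone")
```

A production system would also debounce noisy GPS fixes and track enter/exit transitions rather than raw position, but the core test is this simple.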
Implementing a program based on tracking sensors lets you focus on running your business; makes integrated, automated inventory possible; and prevents the perceived loss of equipment that may only be hiding in a warehouse or at a job site. IoT technology assists in raising the visibility of large and small assets, optimizing utilization and providing accountability to help prevent lost productivity, unauthorized use and theft. Real-time tracking, coupled with historical reports, provides key information that can help you make important operational decisions and predict demand.
The recent hijacking of thousands of printers to print out propaganda for the popular YouTuber, PewDiePie, gave us insight into the direction of the IoT ecosystem and what security breaches mean in an increasingly connected world. The attack, while generally harmless, underscores the need for cybersecurity protocols and policies that address the vulnerability of IoT devices and their increasing potential to cause damage if they are compromised.
The tremendous growth in smart, connected devices in industry, in homes and on people shows no signs of slowing down. There are now far more connected devices in the world than there are people. From personal devices to industrial ones, the growing ubiquity of always-connected devices is leading to a modern-day gold rush of sorts. The commodity in question is not gold; it is the data these devices generate. With analysis, that data can highlight trends and behaviors that allow for a vast new range of use cases across the IoT ecosystem.
Ensuring the security of these devices is already at the forefront of cybersecurity. However, maintaining the privacy of this data gets less attention, despite being critical to continued consumer trust in these technologies.
In many cases, data from a specific device is sent to and stored in the cloud. Often, an IoT device manufacturer will have a data repository hosted in one or more cloud service provider (CSP) instances. All data generated from individual devices is sent to these repositories for storage and analysis. This data requires protection not only at rest, but also in transit. That’s an area where the innate security built into cellular technologies, end-to-end, can play a pivotal role.
Innate cellular security
Cellular networks are less permeable simply because they tend to have fewer connected IoT devices than Wi-Fi and wired networks, since many always-connected devices employ Wi-Fi connectivity. The most common cellular networks also require authentication to connect to the network, even if that authentication is automated with hardware. Many Wi-Fi and wired networks require no such authentication and therefore present far more vulnerability.
Beyond requiring device authentication, cellular networks make data more difficult to intercept. Grabbing an RF signal or creating a fake, malicious cellular network requires more hardware than a computer with a Wi-Fi card. An added benefit is that fewer bad actors attempt to break into cellular networks, since other network types offer easier access to just as much data.
Cellular technology can also play a more active role in securing data. When static accounts are compromised, mobile devices are usually unaffected. So, cellular technology offers protection through two-factor authentication or as part of a three- or four-factor authentication system.
IoT manufacturers can exploit the security of cellular data transmission by performing device-to-device communication with cellular connectivity. This reduces the number of devices on wireless networks and minimizes the surface area for cyberattacks.
In the past, transmitting large quantities of data exclusively through cellular networks was too slow to be practical. As cellular technology has improved, networks built entirely on cellular data transmission have become viable, and companies have built private cellular networks to reap the security benefits of cellular technology. But even when data is not transmitted over purely cellular networks, data collected by IoT systems is more secure when cellular technology is part of the equation.
The move to private LTEs
While cellular networks may be more secure, some may argue that their support for IoT is limited by cost, spectrum availability and their prioritization of mobile devices. They simply were not designed to handle the growing diversity of devices (the advent of 5G technologies will go a long way in addressing this). This is why wireless networking is still a fragmented landscape in business-critical domains.
The concept of private LTE networks then becomes a viable option, enabling IoT-specific connectivity for organizations with clusters of IoT devices and a need to transmit and store collected data in CSP instances.
While commercial LTE networks are typically focused on mobile consumer needs, private LTE networks can be set up relatively inexpensively. These LTE networks provide the range and bandwidth for device-to-device communication and data transfer to a larger backbone network where data can be aggregated and transferred to a CSP instance for storage and analysis.
This means that the ground-level networks, where a majority of the data is freely transmitted, are less permeable. The only place where the data is vulnerable to traditional cyberattacks is after it has transitioned to an IP network, where private connections to the CSP environment can be provisioned to minimize vulnerabilities. This innate security also reduces the vulnerability of backbone networks, since it minimizes the risk of a breach at the last mile of the data pipeline.
Essentially, private LTE networks provide a more secure environment for IoT data, and protect backbone networks, while the data is still in use by IoT devices, the point where it would be most open to attack on traditional Wi-Fi or wired networks.
Moving to cellular
The tendency of many companies and IoT manufacturers is to default to non-cellular networks for internal and external data transfer. But these networks will continue to become more penetrable as IoT grows and more devices present access points to the network backbone.
IoT device manufacturers can improve their own data security and drive a more secure future for IoT as a whole with a transition to private LTEs and end-to-end encryption for transferring data to and from CSP environments.
These days, everything is “smart,” from the IoT toaster to internet-connected toilet paper dispensers. While uses of these devices are limited, their existence points to the increasing availability of resources that enable more important pursuits. Sensor, communication, storage and computing costs are rapidly decreasing, meaning it’s now possible to collect vast amounts of data from sensors attached to expensive equipment like oil and gas rigs, earth-moving tools and factory machinery.
It’s cheap and easy to collect a lot of data, but getting value out of that data is the challenge. In order to use this data to optimize physical assets, machine learning is essential. A recent client engagement of ours demonstrates why.
The client designs, manufactures and leases industrial equipment. The company wanted to remotely monitor products and alert teams to units at risk of failure before they fail, reducing downtime. We helped our client build a prototype machine learning system to address this problem.
We began with understanding, the first step of our lean AI development process. Understanding has two big components: business understanding and data understanding. On the business front, we quickly understood that reducing downtime and turning unplanned maintenance into planned maintenance was a big client business driver. On the data-understanding front, we audited three years of client data.
Going into the project, we thought that we had all the data we needed to build the predictive system. But we discovered that the root cause for many failures was not documented in a consistent manner. As such, we decided not to use this data because it was not clean enough.
So, we pivoted from trying to predict high-cost failures, like engine breakdowns, to trying to predict longer outages. Although this had slightly less business value, it was solvable and could still improve our client’s operational costs. Taking time to thoroughly understand the data before you begin using it can save you time and wasted effort.
Once we understood what data we had to work with, and the shape it was in, we were ready to move on to the next step in the lean AI process: engineering.
In this project, engineering was largely about extracting, transforming and loading data to make it useful for machine learning. As always, we took a production-first mindset and created a scalable data pipeline that merged several data sources, including real-time streaming sensor data, machine metadata from the ERP and weather data.
The pipeline was developed using a combination of tools, from Apache Spark to Dask to HDF5 files. The use of big data tools was necessary because of the volume of data being processed — both at training and inference time.
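The pipeline's core join logic can be sketched in miniature. The snippet below uses pandas in place of the Spark/Dask stack described above, and every table, column and value is illustrative rather than taken from the client engagement:

```python
import pandas as pd

# Illustrative inputs: in production these came from streaming sensors,
# the ERP system and a weather feed (all names here are hypothetical).
sensors = pd.DataFrame({
    "unit_id": [1, 1, 2],
    "timestamp": pd.to_datetime(
        ["2019-01-01 00:00", "2019-01-01 01:00", "2019-01-01 00:00"]),
    "pressure": [101.2, 99.8, 103.5],
    "temperature": [74.0, 75.5, 71.2],
})
erp = pd.DataFrame({"unit_id": [1, 2], "model": ["A100", "B200"]})
weather = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-01-01 00:00", "2019-01-01 01:00"]),
    "ambient_temp": [18.5, 17.9],
})

# Join sensor readings with machine metadata, then with hourly weather,
# yielding one enriched row per sensor reading.
merged = (sensors
          .merge(erp, on="unit_id", how="left")
          .merge(weather, on="timestamp", how="left"))
print(merged.shape)  # (3, 6)
```

At production scale the same joins run over distributed DataFrames, but the shape of the problem, merging time series with slowly changing reference data, is identical.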
We then moved on to the third step of lean AI: modeling. We built a simple baseline random forest model using features hand-engineered from the raw data. Feature engineering incorporates domain expertise into inputs that feed an algorithmic model.
We needed to create appropriate inputs for the time series forecast problem. We worked alongside the company’s mechanics and engineers to identify features such as pressure and temperature ranges. Once we had our baseline model working, we added features and tried more complex modeling techniques like 1D convolutional neural networks. We found that a random forest with appropriately tuned features outperformed more complex models.
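To make the feature-engineering step concrete, here is a minimal sketch of the kind of rolling-window features that might feed such a baseline random forest. The window lengths, operating ranges and column names are all hypothetical, not the client's actual features:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly sensor history for one unit.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "timestamp": pd.date_range("2019-01-01", periods=48, freq="h"),
    "pressure": rng.normal(100, 2, 48),
    "temperature": rng.normal(75, 3, 48),
})

# Domain-informed features: rolling statistics over a 24-hour window,
# plus hours spent outside a recommended operating band
# (the 95-105 pressure band is an illustrative threshold).
df["pressure_mean_24h"] = df["pressure"].rolling(24, min_periods=1).mean()
df["temp_max_24h"] = df["temperature"].rolling(24, min_periods=1).max()
df["out_of_range_24h"] = ((df["pressure"] < 95) | (df["pressure"] > 105)) \
    .rolling(24, min_periods=1).sum()

features = df[["pressure_mean_24h", "temp_max_24h", "out_of_range_24h"]]
print(features.shape)  # one feature row per hourly reading
```

Features like these encode the mechanics' intuition (sustained pressure drift, temperature spikes, time out of range) in a form a tree-based model can exploit directly.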
In the final part of the prototyping process, we focused on the fourth step of lean AI: user feedback. Working with our client’s software engineering team, we prototyped a simple spreadsheet tool that mechanical engineers could use to consume the daily failure predictions. Then, we held a number of working sessions with mechanical engineers. We discovered many interesting nuances to the problem that led us to adapt the model and post-process the raw predictions into a more useful state. For example, we found that many units were being operated outside of the recommended operating range, so they were more likely to fail. Showing those at the top of the list was not particularly informative; rather, we found that looking at changes in the baseline failure rate for units was more interesting. We adapted the spreadsheet to highlight significant daily changes instead of absolute failure rates.
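The shift from absolute risk to baseline-relative change can be illustrated with a toy example. The units and probabilities below are invented; the point is only the ranking logic:

```python
# Hypothetical daily failure probabilities per unit (model output).
preds = {
    "unit_7":  {"yesterday": 0.42, "today": 0.44},  # chronically high, stable
    "unit_12": {"yesterday": 0.05, "today": 0.21},  # low baseline, sharp jump
}

# Rank by change from each unit's own baseline rather than by
# absolute risk, so routine out-of-range operation doesn't dominate.
flagged = sorted(
    preds,
    key=lambda u: preds[u]["today"] - preds[u]["yesterday"],
    reverse=True,
)
print(flagged[0])  # unit_12 surfaces first despite its lower absolute risk
```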
Our client ended up with a working prototype that gave daily failure predictions and built the capability to continue developing the prototype.
Although IoT systems can quickly arm you with massive quantities of data, it’s important to remember that there must be a method to the madness. There is value to be had from sensor data, but it’s hard to get it if you just shoot from the hip because you’ll flail a lot. By understanding the business problem and your data upfront — and ensuring that they’re well-matched — you’ll save a lot of backtracking. Then, follow a methodical process such as lean AI to get to value quickly.
If you’re developing, producing or selling IoT devices, it might be a good idea for you to get acquainted with the term anisotropic conductive film, or ACF for short. You don’t hear much about it. However, it plays a big role in the manufacture of small printed circuit boards (PCBs), especially when IoT products are made of rigid-flex circuits.
ACF serves as the interconnection between the rigid and the flex circuit. There are other methods for making this interconnect, but ACF is gaining momentum in the market due to its flexibility and reliability. The connection between the rigid and flex circuits is made using a thin film of polymer containing tiny conductive polymer balls. The process requires the right tools and tightly controlled temperatures, since both circuits can tolerate only so much heat.
Two other considerations must be taken into account during manufacturing. First, manufacturing engineering must ensure that the interconnect transfers the required current capacity from the flex circuit to the rigid circuit, and vice versa. Second, the size and composition of the polymer balls and film must be carefully scrutinized to ensure the ACF material is optimized for the application.
It also takes seasoned PCB manufacturing veterans to properly control the ACF process, largely because of the temperature cycles involved. Because the conductive polymer particles contain gold and nickel, ACF is sensitive to temperature cycle changes, and overheating must be avoided. If the process engineer doesn't define temperature cycles suited to the polymer ball size and composition, the ACF interconnect bond will either fail outright or exhibit poor reliability, with a high probability of latent failures in the field.
As for its flexibility, ACF offers IoT device OEMs several applications. It can be used with chip-on-board, which means a bare die or chip can be mounted on the board with the ACF process. It can also be used on chip-on-flex, which is similar to chip-on-board, but the bare chip is mounted on the flex circuit. There is also chip-on-glass and flex-on-glass, as well as other applications.
Typically, the process is to heat the ACF tool to a specific temperature range, then align the flex and rigid boards at the location where the ACF bond will be made. The film is placed between the rigid and flex circuits, and controlled pressure and temperature are applied to ensure proper adhesion and a solid joint.
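As a rough illustration of why the bonding window matters, the sketch below checks a proposed temperature/pressure profile against a film-specific window. All numbers are hypothetical, not vendor specifications:

```python
# Illustrative check that a proposed ACF bonding profile stays within
# the window defined for a particular film and polymer ball spec.
def profile_ok(temp_c, pressure_mpa, window):
    """Return True if the profile sits inside the bonding window."""
    return (window["temp_min"] <= temp_c <= window["temp_max"]
            and window["press_min"] <= pressure_mpa <= window["press_max"])

# Hypothetical window for one film; real values come from the film datasheet.
window = {"temp_min": 150, "temp_max": 190, "press_min": 2.0, "press_max": 4.0}

print(profile_ok(170, 3.0, window))  # True: inside the bonding window
print(profile_ok(210, 3.0, window))  # False: overheating risks the Au/Ni particles
```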
One of the biggest mistakes organizations make when it comes to security is falsely believing that the internet of things is the future, rather than understanding that it is, in fact, the present. We already have numerous IoT devices on our corporate networks; they've just been flying under the radar. With many of our enterprise customers, it's not unusual that we profile a third or more of IP-enabled endpoints as IoT-type devices.
IoT security has not yet been assigned the same importance as safeguarding traditional endpoints, largely because there is still tremendous confusion around if and how these connected devices can become targets for malicious actors. As a result, most security teams don’t know what IoT devices are on their networks, where they reside or how they introduce enterprise risk.
The typical business environment is home to a plethora of cameras, phones, printers, copiers and other productivity devices. Any device that is connected to the internet can expose an organization to a data breach, and we’ve already seen numerous cases where cybercriminals have exploited vulnerabilities in IoT products (hacked networked printers becoming soldiers in a botnet army, for example).
While this is scary enough, the consequences of IoT attacks in other industries can be far more severe: think life-saving medical devices in healthcare or connected military systems in government. And these, too, are now a reality, as more traditionally isolated operational technology (OT) devices become IP-enabled and part of the network ecosystem. HVAC, mechanical and building control systems, manufacturing floor controllers and robots, and fire, environmental and security systems are now all IP-enabled. This makes OT devices rich targets for compromise, not only for the traditional reasons of industrial sabotage and critical infrastructure attacks, but also because a penetration of the OT network can open the possibility of moving laterally to compromise assets on the enterprise IT network, and vice versa.
In simple terms, organizations across industries are battling an ever-growing attack surface thanks to the convergence of IT and OT networks, cybercriminals are increasingly targeting connected devices and the consequences of IoT attacks are becoming more severe. In this threat landscape, IoT security is no longer something that can be left as a future (or forgotten) concern. Rather, security teams must acknowledge that the IoT era is upon us, and embrace it in a secure and structured way.
Achieving IoT security
Traditional security technologies are not the answer to our IoT security problems. In fact, commonly used security products are fundamentally flawed at delivering the full visibility needed to secure IoT environments because they don’t provide a true picture of real-time activities across the network, between IT and OT, and in cloud environments. Additionally, most of these technologies fail to identify potential leaks and unauthorized communication paths.
To achieve IoT security, organizations must combine specialized network visibility technologies with several important best practices:
1. Gain real-time network visibility.
The biggest IoT security challenge facing organizations is a lack of visibility into what devices are on their networks as well as a lack of visibility into the networks themselves — whether infrastructure is being managed, whether there are vulnerabilities due to unknown and unmanaged systems or paths, etc. In fact, on average, Lumeta’s research in production environments shows that more than 40% of today’s dynamic networks, endpoints and cloud infrastructure are unknown, unmanaged, rogue or participating in shadow IT, resulting in significant infrastructure visibility gaps that can lead to breaches.
Making IoT visibility more complex is the fact that client-based software approaches are not possible since one can’t just install client software on these closed, embedded software endpoints to provide any telemetry. The combination of network-centric visibility and vulnerability assessments is the only possible solution.
You can’t protect the unknown, and the only way to determine what’s on your network, how everything is connected and if devices are properly protected is to use specialized technology that provides real-time IT and OT network visibility of devices, ports, cloud environments, virtual machines, etc. across hybrid environments. Real-time network visibility allows security teams to identify endpoints that are frequently missed by vulnerability assessment tools, as well as monitor for new or changing IoT infrastructure.
2. Identify leak paths.
Once the right visibility tools are in place and an accurate census of network devices is drawn, it becomes easier to identify vulnerable paths and possible “leaks” across protected zones and discover unauthorized communications to and from the internet in real time to prevent them from being exploited by a malicious actor.
Threat intelligence comes into play here, as intelligence feeds can provide security context on unauthorized leak paths, specific attack activity, misconfigurations or actual authorized changes. More than just knowing that endpoints are on the network, security teams need to have tight control over where they are, what they are doing and who they’re communicating with, at all times. The combination of full network context and best-of-breed security intelligence makes this possible.
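A minimal sketch of the leak-path idea: compare observed zone-to-zone flows against an allow-list of authorized paths. The zones and flows here are illustrative, not drawn from any particular product:

```python
# Authorized zone-to-zone communication paths (illustrative).
allowed = {("OT", "historian"), ("IT", "historian"), ("IT", "internet")}

# Flows observed on the network; the second one is an unauthorized
# direct path from the OT zone to the internet.
observed_flows = [
    {"src_zone": "IT", "dst_zone": "internet"},
    {"src_zone": "OT", "dst_zone": "internet"},
]

# Anything not on the allow-list is a potential leak path.
leaks = [f for f in observed_flows
         if (f["src_zone"], f["dst_zone"]) not in allowed]
print(leaks)  # flags the direct OT-to-internet path
```

Real products enrich this check with threat intelligence and configuration context, but the core comparison, observed paths versus authorized paths, is the same.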
3. Segment the network.
Networks can and should be broken down into isolated segments or zones to better control where authorized users, communications and devices can go, while disallowing unauthorized activity and reducing the attack surface. By segmenting the network in this way, even if cybercriminals or unauthorized users are able to exploit an IoT host on a network segment, they’ll be confined to that specific space rather than having the ability to move freely across other adjacent networks.
When it comes to network segmentation, there are a few important elements to keep in mind:
- Anything touching the network should be segmented by type, purpose, access rights or solution type.
- No device should be trusted unless authorized.
- Segmentation rules and policies must be continuously tested and validated.
- Active network infrastructure monitoring is important to identify changes in communication channels and network flow paths that might result in segmentation policy violations, as well as potential leak paths to the internet from OT environments.
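The continuous-validation rule above can be sketched as a simple policy check: each device type maps to the segment it should occupy, and anything out of place is flagged. Segment names and device types are illustrative:

```python
# Segmentation policy: which segment each device type belongs in
# (names are illustrative, not a recommended layout).
policy = {"camera": "iot_vlan", "printer": "iot_vlan",
          "workstation": "corp_vlan", "plc": "ot_vlan"}

# Current device inventory as discovered on the network.
inventory = [
    {"name": "cam-01", "type": "camera", "segment": "iot_vlan"},
    {"name": "plc-07", "type": "plc", "segment": "corp_vlan"},  # violation
]

# Flag devices whose observed segment violates the policy.
violations = [d["name"] for d in inventory
              if policy.get(d["type"]) != d["segment"]]
print(violations)  # ['plc-07']
```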
Taking back network control
IoT attacks are already commonplace, making real-time visibility into ever-expanding, dynamic networks paramount. Only when they have a complete understanding of what’s on their network can organizations tackle IoT security effectively. With the new year just beginning, there’s no better time to move IoT security from the back burner to the forefront — and, with it, transfer control back into the hands of its rightful owner within the enterprise.
The great promise of technology has always been about making things better. Whether that means transforming businesses for success, improving communities and lives, protecting the environment or just making things more convenient, technology can be a force for good. Advances in artificial intelligence and connected devices are blurring the lines between the physical and digital, making it easier than ever before to "do good" faster and more efficiently.
For example, intelligent systems like EarthRanger and Connected Conservation are transforming protected area management from push pins on wall maps into a single, integrated, real-time operational platform. By combining data from various sensors with ranger observations and historic data on poaching incidents, these platforms can pinpoint and predict threats, sometimes before they happen, enabling park managers to respond more quickly. With limited resources to deploy, it's paramount that park managers respond to real threats, not just animals breaking through the fence or going to sleep. In other words, context is everything when it comes to interpreting the data.
What is contextual awareness?
Contextual awareness is the ability of an application to access information about the physical environment and automatically adapt its behavior appropriately in real time. The brains of living organisms process contextual information from the outside world and generate adaptive responses, such as seeking food or running from danger. Likewise, a contextually aware system is capable of detecting and anticipating changing circumstances in the environment and reacting to them in real time with the right response.
Context is any information that can be used to characterize the situation of an entity (person, place or thing). Most use cases focus on taking contextual information from the environment (computing, user, physical) and combining it with knowledge about the entity to determine the situation and generate the right response. Responses can range from tailored (personalized, adaptive, more precise) presentation of information and services to a user, to automatic execution of a service.
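A toy sketch of this context-to-response loop, echoing the protected-area example from the introduction (the entity, context fields and responses are all invented for illustration):

```python
# Combine knowledge about the entity with environmental context
# to choose an adaptive response.
def respond(entity, context):
    if entity["kind"] == "wildlife_ranger" and context["fence_breach"]:
        # Context disambiguates: an animal bumping the fence is not
        # the same situation as a detected human intrusion.
        return "dispatch" if context["human_detected"] else "log_only"
    return "no_action"

print(respond({"kind": "wildlife_ranger"},
              {"fence_breach": True, "human_detected": False}))  # log_only
print(respond({"kind": "wildlife_ranger"},
              {"fence_breach": True, "human_detected": True}))   # dispatch
```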
Here are a few public safety examples where contextual awareness made a difference:
Natural disasters are on the rise, making it increasingly difficult for municipalities to manage spiraling costs. Wildfires burned through nearly 2 million acres of forest this year in California, the largest amount of burned acreage recorded in a fire season. While natural disasters cannot be avoided, interconnected smart cities will empower governments to do more with less and better safeguard private and public assets. In fact, interconnected technology brought a new level of precision and a data-driven approach to managing the California wildfires. Real-time data about fire conditions can be collected and blended with predictive weather data about wind, humidity and temperature to provide fire personnel with comprehensive situation awareness. Drones can be deployed and operated at a fraction of the cost of traditional methods and provide real-time updates to emergency response personnel. That’s a critical new level of insight for answering questions in a wildfire situation like where to dispatch personnel, when and where evacuations should be ordered, and where and when to deploy fire retardant.
Detecting air pollution
More than 80% of people living in urban areas that monitor air pollution are exposed to air quality levels that exceed World Health Organization limits. Scientists at the University of California, San Diego built a prototype air quality monitoring system that used small, portable sensors to monitor pollution throughout the city. The CitiSense sensors detect pollution levels at their location and transmit the air quality readings to any smartphones in the vicinity. Participants in the pilot discovered that pollution can vary by location and time of day, and many took action to limit their exposure, such as taking a different route.
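The client-side logic of such an app might look something like the sketch below, which compares a local PM2.5 reading against the WHO 24-hour guideline value (25 µg/m³ under the 2005 guidelines; the app code itself is hypothetical):

```python
# WHO 2005 guideline for 24-hour mean PM2.5 exposure.
WHO_PM25_24H = 25  # micrograms per cubic meter

def exposure_alert(reading_ug_m3):
    """Suggest an action based on a local sensor reading (illustrative logic)."""
    if reading_ug_m3 > WHO_PM25_24H:
        return "consider a different route"
    return "ok"

print(exposure_alert(40))  # consider a different route
print(exposure_alert(10))  # ok
```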
How contextual awareness hones AI performance
When applied effectively, contextual awareness frames the range of outcomes or behaviors that AI should suggest by narrowing the field of possible outcomes. This is particularly important for IoT applications, where targeted behaviors should be achieved quickly, with minimal data processing and power usage. And, as data sets keep growing exponentially, it’s also an inexpensive way to make AI systems faster and more accurate. Context is, in effect, a multiplier for data and what gives it meaning, according to IBM. The more context, the higher the value of the data.
The context-aware computing market
The context-aware computing (CAC) market is expected to reach over $125 billion by 2023, growing at over a 30% compound annual growth rate from 2016 to 2023. The proliferation of mobile computing devices and rising demand for more personalized user experiences is helping to fuel the growth. Enterprise desire to augment productivity and collaboration will also play a large part. CAC is generally categorized into the following product types: adaptive phones, active maps, augmented reality and guided systems, cyber guides, conference assistants, fieldwork, web browsers, location-aware information delivery, office assistants, shopping assistants, people and object pagers.
Other CAC for good use cases
Macro challenges, such as growing urbanization and elderly populations, climate change impacts and cybersecurity threats, are bringing heightened attention to the potential of technology to be a force for good. IBM, Microsoft, Google, the United Nations and others are exploring the ways in which intelligent, interconnected technology can benefit society. Solving such large challenges won’t be easy, however, which is why putting them into context is key. “Intelligent” behavior often has more to do with simple situational understanding than complex reasoning. Contextual awareness provides a solution to “frame” the problem, opening the door to some other fascinating innovations such as:
- Caring for the elderly with a home-based fall detection system and e-care@home;
- Lowering energy usage and preventing fires in buildings;
- Improving the safety of drivers by monitoring their health status and driving conditions;
- Making airlines safer through real-time flight tracking;
- Protecting the earth by reducing air pollution, conserving water and saving critical species;
- Advancing healthcare with remote patient monitoring, chronic disease management, reduced emergency room waits and safer operating rooms;
- Reducing gun violence by automatically detecting, locating and alerting police to gunfire; and
- Protecting maritime trade routes, critical to national security, through improved threat prediction.
All of these examples share one thing in common — interconnected IoT ecosystems that are contextually aware. While AI, IoT and CAC may be considered different technologies today, eventually they will be so interconnected that it will be hard to differentiate them. Xerox PARC visionary Mark Weiser may have put it best when he said, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”