The internet of things is becoming increasingly ubiquitous in American society. Smart home devices control our lighting, heating and entertainment systems. Health monitoring devices alert caregivers the moment medical issues arise. IoT-connected vehicles are now commonplace. Even traffic patterns and manufacturing processes are routinely managed using the network.
Unfortunately, the added convenience of IoT brings with it a host of new security concerns, and many of the IoT devices in widespread use today remain dangerously unprotected.
The government has now turned its attention to this issue. In August, the FBI issued an alert titled “Cyber actors use internet of things devices as proxies for anonymity and pursuit of malicious cyber activities,” which warned both the developers and owners of IoT-connected devices of the security vulnerabilities present throughout the network. It urged them to enact safeguards to address these vulnerabilities and, not long after the alert was issued, California became the first state in the U.S. to pass a law regulating the security of IoT-enabled devices.
SB 327 indirectly sets the stage for the future
Government intervention in IoT security comes as no surprise, especially since the stakes are so high for privacy and safety. SB 327 (Information Privacy: Connected Devices), slated to go into effect January 1, 2020, dictates that any manufacturer of IoT or smart devices ensure that the appliance has “reasonable” security features that “protect the device and any information contained therein from unauthorized access, destruction, use, modification or disclosure.”
The vagueness of these terms ultimately leaves the law open to significant interpretation, but that open-endedness can be viewed as a positive. After all, in an industry that moves as quickly as IoT development, the government is hardly in a position to prescribe specific technologies that might be obsolete before the law even goes into effect. Instead, SB 327 institutes a framework within which security experts can operate, creating opportunities for companies to quickly bring to market next-generation security and authentication tools.
The need for IoT security
Despite not being in a position to recommend specific technologies, the government is keenly aware of the reasons IoT security is important. In fact, one of the “reasonable measures” called for in SB 327 is that default passwords must be unique to each IoT-connected device — an obvious reference to the Mirai botnet, a malware capable of infecting a wide range of IoT devices.
Mirai is an excellent example that highlights one of the most widespread vulnerabilities of the IoT network: While data breaches can put consumers’ personal information at risk, poor IoT security also leaves devices open to other types of attacks. The Mirai botnet takes control of IoT devices that still use factory-default usernames and passwords, then conscripts them into large-scale distributed denial-of-service attacks. Because these default settings can be common to entire product lines, a staggering number of IoT devices are left open to this type of breach.
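To make the default-credential weakness concrete, here is a minimal sketch of the kind of dictionary check a Mirai-style scanner performs. The credential list and the device stand-ins are hypothetical, not Mirai's actual code:

```python
# Illustrative sketch of the dictionary check a Mirai-style scanner performs.
# The credential list and device stand-ins below are hypothetical.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("user", "user"),
]

def is_vulnerable(accepts_login):
    """True if the device accepts any factory-default username/password pair."""
    return any(accepts_login(user, pw) for user, pw in DEFAULT_CREDENTIALS)

def unpatched(user, pw):           # still running its factory defaults
    return (user, pw) == ("admin", "admin")

def patched(user, pw):             # unique per-device password, as SB 327 requires
    return (user, pw) == ("admin", "x7#kQ9!mZ2")

print(is_vulnerable(unpatched))    # True
print(is_vulnerable(patched))      # False
```

Because the scan is a simple loop over a short list, any product line shipped with a shared default password can be compromised at scale; a unique per-device password defeats the entire dictionary.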
Although Mirai is now well known to cybersecurity experts, the simplicity of the vulnerabilities it exploits makes it difficult to fully eradicate. The reference to Mirai highlights not just the danger of that specific botnet, but the idea that even the smallest lapses in IoT security can leave entire product lines vulnerable.
Of course, botnets like Mirai aren’t the only threat, and even companies on the cutting edge of technology can discover startling IoT vulnerabilities. Earlier this year, Tesla Motors discovered that the key fobs issued with the company’s Model S vehicles were susceptible to cloning, providing tech-savvy car thieves with an easy exploit. Although the oversight has since been addressed, the revelation served as a sobering reminder that no one using IoT technology is immune. For many companies, foundational security improvements must be made.
Concrete steps for a secure future
SB 327 calls for “security procedures and practices appropriate to the nature of the information.” This means that the prescribed safeguards will likely vary depending on the technology being used and the information at risk, putting security experts in an ideal position to advise companies on the most effective and appropriate safety measures to take. Although the law declines to recommend specific steps for developers of IoT-connected devices, there are a number of ways to begin improving your IoT security before the law goes into effect:
- Work with a proven security partner you trust. The IoT security landscape is complex, but there are credible experts standing by to help.
- Stop using static credentials for authentication. Usernames, passwords and weak symmetric tokens can leave secure data unnecessarily vulnerable.
- Use strong authentication based on digital identities that are renewed (not static) and difficult to steal. This should increase trust and confidence within the supply chain and across the entire ecosystem (including vendors and customers).
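One way to realize renewed, non-static credentials is with short-lived, time-windowed tokens derived from a per-device key. The sketch below is purely illustrative: the key, token format and one-hour lifetime are assumptions, and many production systems would use certificate-based digital identities instead:

```python
# Sketch of short-lived, renewable credentials in place of a static password.
# The key, token format and lifetime are illustrative assumptions.
import hashlib
import hmac
import time

SECRET = b"per-device-provisioned-key"   # unique per device, never shared across a line
TOKEN_LIFETIME = 3600                    # tokens roll over every hour

def issue_token(device_id, now=None):
    """Derive a token valid only for the current time window."""
    now = int(now if now is not None else time.time())
    window = now // TOKEN_LIFETIME       # changes each hour, so old tokens expire
    msg = f"{device_id}:{window}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(device_id, token, now=None):
    return hmac.compare_digest(token, issue_token(device_id, now))

t = issue_token("sensor-42", now=1_000_000)
print(verify("sensor-42", t, now=1_000_000))                        # True: current window
print(verify("sensor-42", t, now=1_000_000 + 2 * TOKEN_LIFETIME))   # False: expired
```

Unlike a stolen static password, a captured token here is useless within an hour, and revoking a single device never affects the rest of the fleet.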
As consumers grow more knowledgeable and discerning, IoT developers will need to get serious about security. The California law won’t go into effect for just over a year, but it is almost certainly the first of many laws that will work to ensure that customers have a reasonable expectation of protection. Understanding both the vulnerabilities inherent to the internet of things and the steps that can be taken to mitigate them is a critical first step to a more secure future.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Earlier we talked about wire bond testing for IoT printed circuit boards and the types of wire used to connect a bare chip to a substrate or the PCB surface itself. This discussion deals with wire bond loops pull testing and verification, which is critical for IoT device reliability.
Your immediate reaction is probably: What loops are you talking about? To the average everyday person, it would appear a wire goes from point A to point B in a straight line, right?
Actually, when dealing with wire bonding, the wire itself creates a loop, or wire arc, between the bare chip and the substrate or PCB surface connection. And therein lies a series of failure modes in the wire bonding process for IoT devices. You have to remember these are very fine, very delicate wires. Wire bond loops demand precise pull testing, well-trained technical personnel and a wire bond pull tester to ensure reliable wire bond joints. This expertise is critical for quickly spotting problematic areas during the assembly and manufacture of IoT PCBs, including rigid and combination rigid-flex circuits.
There are eight modes of failure for wire bond loops. Savvy electronics manufacturing services providers assembling and manufacturing IoT PCBs are well equipped with highly reliable wire bond testing for these loops to assure that an IoT PCB doesn’t incur any of these failure modes.
Wire bond loops can break at different segments along the wire length. One of the weakest points of a wire bond is at the tip of the loop, where the wire is deformed as the loop is formed. This is known as the neckdown breakpoint, and it is the most common failure.
Another is a mid-span break, somewhere in the vicinity of the middle of the wire. Two other failures are known as failures in bond: One occurs at the interface between the wire and the metallization on the bare die, and the other at the interface between the wire and the metallization on the substrate, package post or other connection point other than the bare die or chip.
Two other failures are due to lifted metallization from the die and from the substrate or package post. The last two failures can be due to a die or substrate fracture.
There are a number of reasons these failures can occur in IoT PCB assembly and manufacturing. A major one is the PCB’s surface or the substrate isn’t sufficiently clean. Plasma etch cleaning using argon gas must be used here so that all the residues of any oxidation are completely removed and the surface is 100% clean. This way bonding is reliable and sturdy.
There can be other reasons for wire bond loop failure. The quality of the wire may be poor, or the wire may be contaminated. The wire bonding machine may not be programmed properly, so the loops it forms aren’t optimized. As a result, loops are created that aren’t sufficiently sturdy to maintain the bond under a certain test force.
In part one of this series, I discussed the mindset needed to get started on building an accurate and successful predictive analytics pipeline. Now, let’s dig into the steps we’ve found to be universal in building effective predictive analytics.
Step one: Get your data ready
Laying infrastructural groundwork is always required to enable rapid deployment of new analytics in the present, and in the future. This is a substantial effort — transforming data to prime the models, building scalable infrastructure that enables efficient orchestration of analytic workloads, aligning ingestion/egestion pipelines to data profiles and so forth — but we view this as a one-time effort that we undergo with our customers on their path to digitalization. For the sake of brevity, let’s assume that work has been completed and focus on building a single predictive analytic.
The first step is to ensure that the data you will use to build your models is actually usable. While some data sets will check the IDA (initial data analysis) boxes in terms of forensic quality, if the data isn’t trustworthy or useful to your end user, it won’t be of any use to you either. To identify what is “useful,” you should be seeking end-user support in the form of quality assurance as well. If done correctly, this will also reduce effort in the data cleansing process.
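As a sketch of such a usability check, the snippet below flags a data set whose required fields are too sparse to trust. The field names, sample records and 5% missing-data threshold are illustrative assumptions, not a standard:

```python
# Hypothetical pre-modeling readiness check: the field names, records and
# 5% threshold are illustrative, not a standard.
def data_ready(records, required_fields, max_missing_ratio=0.05):
    """Flag a data set unusable if required fields are absent or too sparse."""
    issues = []
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / len(records)
        if ratio > max_missing_ratio:
            issues.append(f"{field}: {ratio:.0%} missing")
    return (len(issues) == 0, issues)

readings = [
    {"meter_id": "A1", "kwh": 3.2},
    {"meter_id": "A2", "kwh": None},   # a gap the models would silently absorb
    {"meter_id": "A3", "kwh": 2.9},
    {"meter_id": "A4", "kwh": 3.1},
]
ok, issues = data_ready(readings, ["meter_id", "kwh"])
print(ok, issues)   # False ['kwh: 25% missing']
```

Running a check like this before modeling surfaces problems while they are still cheap to fix, and gives end users a concrete artifact to sign off on.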
Step two: Identify your ‘problem’
Once the data is staged, evaluate the end users’ current state of needs. What specific problem are they trying to solve? Where will predictive capabilities enhance your stakeholders’ ability to “act” before a problem arises? And where will an analytic provide the greatest “improvement” from the current heuristic model?
These improvements can take the form of 1) accuracy or 2) efficiency of prediction. Make sure that the model seeks to improve performance on one or both of these vectors, and that the expected improvement is meaningful from the stakeholder’s perspective. For instance, while improving the performance of a heuristic model by 33% is meaningful to a data scientist, no COO is going to approve investment in a new predictive model if it will likely reduce churn from 0.6% to 0.4% and the impact on the bottom line is minimal.
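The arithmetic behind that churn example is worth making explicit, since the same change reads very differently depending on the audience:

```python
# The churn example above, worked through: a 33% relative gain can still be
# a tiny absolute change.
baseline, improved = 0.006, 0.004                  # churn of 0.6% reduced to 0.4%

relative_gain = (baseline - improved) / baseline   # what the data scientist quotes
absolute_gain = baseline - improved                # what the COO sees

print(f"{relative_gain:.0%} relative, {absolute_gain:.2%} absolute")
# 33% relative, 0.20% absolute
```

Framing model improvements in the stakeholder's units, here absolute percentage points of churn, avoids pitching a project on a number the business does not feel.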
Step three: Define business impact
So, how do you get in the COO’s good graces? One important way is to identify how business impact will be measured, find the metrics that are mission-critical to the business and build your models to help improve those metrics. Driving the outcomes that matter to the stakeholders will result in a higher likelihood of adoption by these stakeholders.
This exercise will be a combination of understanding the end-user behaviors that you want to enhance or influence to improve outcomes, and then identifying how (and where) data can be surfaced in a manner to drive these behaviors. These outcomes will likely have KPIs associated with them — that’s a great place to start.
As an example, in our utility world, grid reliability metrics (SAIDI, CAIDI, SAIFI, etc.) have been particularly compelling, as our stakeholders seek to optimize them regularly. So, to drive the most impact for your stakeholders, use these metrics as the needle you need to move.
Step four: Build it
At this point, it’s time to build the analytic. You will need to connect:
- Your data that has been staged for the analytic build;
- The problem that is needed to be solved and the success metrics associated with that problem; and
- The data science/analytic tools at your disposal.
There are many flavors of tools to use for different analytics. In our day-to-day work, we often use the simpler techniques, such as anomaly detection, linear regressions and forecasting, to solve our problems. But as automation and more advanced means of analytic abstraction become ubiquitous, our ability to dive into more advanced techniques, such as neural networks and other forms of deep learning, grows as well.
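One of the simpler techniques named above, anomaly detection, can be sketched in a few lines using the modified z-score, a robust, median-based variant that a single outlier cannot skew. The 3.5 threshold is a common rule of thumb and the sensor readings are made up:

```python
# Median-based anomaly detection (modified z-score). The 3.5 threshold is a
# common rule of thumb; the sensor readings are illustrative.
from statistics import median

def anomalies(values, threshold=3.5):
    """Flag points whose modified z-score exceeds the threshold."""
    med = median(values)
    mad = median([abs(v - med) for v in values])   # median absolute deviation
    if mad == 0:
        return []
    # 0.6745 rescales MAD so the score is comparable to a standard z-score.
    return [v for v in values if abs(0.6745 * (v - med) / mad) > threshold]

sensor = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 48.7, 20.1]
print(anomalies(sensor))   # [48.7]
```

A mean/standard-deviation version of the same check can miss this spike, because the outlier inflates the standard deviation it is measured against; that is why the median-based score is often preferred for sensor streams.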
That being said, it is critical that the tools and techniques being used are not chosen based on their sophistication, but rather fit with the problem at hand. Solving a problem simply does not make it any less impactful. Remember that, especially when faced with resourcing challenges or aggressive timelines.
Step five: Optimize!
Finally, map out expected ROI for use of the model. This will be helpful in model tuning and optimization. The ROI exercise usually entails understanding costs of Type I and II errors, as well as the benefits of the true positive prediction. Knowing these will help tune optimal sensitivity to (and the right balance of) precision and recall in the results. It’s also the first step toward more powerful prescriptive analytics that not only predict outcomes, but can suggest optimal paths toward resolution or reconciliation.
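As a sketch of that ROI exercise, with entirely hypothetical counts and dollar figures, the net value of a model follows directly from the costs of Type I and II errors and the benefit of a true positive:

```python
# Hedged sketch of the ROI exercise: all counts and dollar figures are
# hypothetical inputs for an imagined outage-prediction model.
def expected_roi(tp, fp, fn, tn, benefit_tp, cost_fp, cost_fn):
    """Net value over a batch of predictions; true negatives assumed cost-neutral."""
    return tp * benefit_tp - fp * cost_fp - fn * cost_fn

# Evaluated on 1,000 events:
tp, fp, fn, tn = 80, 40, 20, 860
value = expected_roi(tp, fp, fn, tn,
                     benefit_tp=5000,   # avoided outage
                     cost_fp=500,       # unnecessary truck roll (Type I error)
                     cost_fn=5000)      # missed outage (Type II error)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"Net value: ${value:,}  precision={precision:.2f}  recall={recall:.2f}")
# Net value: $280,000  precision=0.67  recall=0.80
```

Because a false negative here costs ten times a false positive, the optimal operating point favors recall over precision; flipping the cost ratio would push the tuning the other way, which is exactly why the ROI exercise should precede model tuning.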
These ROI calculations also come in handy as a powerful marketing tool — showing precise impact will help with adoption of the analytic and can lead to opportunities to build more analytics.
And there you have it. Predicting the future is not easy, and accurately doing so requires extreme levels of precision, skill and quality data. But if you start with the right data, need, ROI and the right tool, you’ll soon be reaping the many benefits predictive analytics has to offer.
To meet IoT product design challenges head-on requires thinking through multiple development hurdles early on. Designing an IoT product can be challenging: There are cost and time-to-market considerations on one hand, and on the other is the need to ensure that the final product has a user-friendly interface and runs smoothly. Ideally, a good design approach results in the best of both worlds.
These considerations were important in the efforts to design the iKeyp, a smartphone-enabled personal safe that protects against medication diversion and can be placed securely within a kitchen or medicine cabinet without the use of tools. An IoT product, the device features a keypad where a security code can be entered and an accelerometer that detects movement which, if triggered, sends alerts to the owner’s smartphone or smart device linked to the iKeyp.
Our many years of experience designing IoT products have taught us that enabling smooth communication between the application software and the device’s embedded firmware, as well as meeting the key requirements that make an IoT device easy to use, is often difficult. In designing the iKeyp Smart Safe, we faced and overcame similar challenges, and we share details from our design approach here.
Align competing requirements with underlying goals
According to Jan Niewiadomski, senior director of systems engineering and architecture at IPS, when first approached to help design the iKeyp, one of the biggest struggles was to meet the battery life requirements of using AA batteries that could last for six months while also enabling Wi-Fi connectivity. “The technical reality was,” Niewiadomski said, “that 24/7 Wi-Fi connectivity would drain the AA batteries within days.”
To address these competing requirements required a holistic approach. First and foremost, we wanted to make sure that the design we devised met the overall goal, which was to provide consumers with an easy, convenient, affordable and secure way to store and remotely access prescription medicine and other small valuables.
To meet these goals, and to meet cost targets, we were careful to select and package components in a way that would minimize parts and wiring to reduce costs and maximize usable internal space. Niewiadomski also had his team take a hard look at all aspects of power management, including how the device woke up, how and when it would be used, and how it would connect to Wi-Fi. Careful consideration had to be given to the location and range of the antenna. By minimizing communication to the app and the cloud services to only once a day or only when the device is touched or jostled, we were able to meet the battery life requirements while still providing a good user experience.
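The connectivity policy described above can be sketched as a simple wake decision. The interval, event names and function are illustrative assumptions, not the iKeyp's actual firmware logic:

```python
# Illustrative sketch of the event-driven connectivity policy: the radio
# stays off except for a daily check-in or a tamper event. The interval and
# event names are assumptions, not the product's actual firmware.
DAILY_SYNC_S = 24 * 3600   # connect to the cloud at most once a day

def should_connect(now, last_sync, events):
    """Wake the Wi-Fi radio only when needed, to preserve AA battery life."""
    if any(e in ("touched", "jostled") for e in events):
        return True                          # alert the owner immediately
    return now - last_sync >= DAILY_SYNC_S   # otherwise, daily check-in only

print(should_connect(90_000, 0, []))             # True: more than 24 h elapsed
print(should_connect(3_600, 0, []))              # False: stay asleep
print(should_connect(3_600, 0, ["jostled"]))     # True: tamper event
```

The point of the sketch is the asymmetry: the expensive resource (the radio) is gated behind the cheap one (a sensor interrupt and a clock comparison), which is what turns days of battery life into months.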
Collaborate to connect application software to device firmware
Another design challenge that frequently crops up in IoT product design is connecting application software to the device’s firmware. In the case of the iKeyp Smart Safe, it was important that the safe owner could use the app for security alerts if, for example, someone is trying to break into the safe, but also to enable remote access to the safe and/or its settings. This required communication between a third-party application (developed by another company) and the safe’s firmware.
We started testing with simple messages to work out the details. Similarly, for testing the app, we used intermediate software that both sides could test with, so that we could independently confirm where the bugs were: cloud side, app side or in the embedded firmware. As a best practice, our teams work closely together and collaborate frequently, both internally and externally, throughout the process.
Make key strategic decisions early on
Another key factor in getting products to market quickly while containing costs is making strategic decisions as early as possible in the process. When designing the iKeyp, we decided in the beginning stages of development that we would use AWS as our cloud provider. Making this choice early on allowed us to reduce risk by selecting a manufacturer and a Wi-Fi certified module that also supported AWS. Choosing a certified module was also an important early-stage strategic decision. While it added a little more to the design cost, it reduced the FCC certification costs by about 80% and, more importantly, sped up the FCC approval process by three to four months.
Test as early and often as possible
To help ensure that the product could move from design to execution without delays, we also built several prototypes. The prototypes allowed us to test the full functionality of the product before the physical device was ready. For example, when the keypad is touched and the safe door opens, a sensor detects the open door. However, without a door or a physical unlock mechanism available, we used a sandbox testing environment on AWS to test these features and make adjustments before production.
Addressing the multifaceted design challenges inherent with IoT product development requires spending significant time in the requirements stage, weighing design, execution and strategic considerations. By accurately planning and refining the design early on and working out technical bugs before proceeding to production, a successful IoT product can be developed and brought to market more rapidly and cost-effectively.
This is the second in a three-part series on IoT, location and TDOA. To read part one, click here.
Time difference of arrival (TDOA) in low-power wide area networks (LPWANs) has some common, as well as unique, challenges with respect to accurate localization of devices in the real world. These include:
- Variable density and spatial diversity of network gateways/base stations;
- Urban morphology that can limit the hearability of devices, causing potentially erroneous measurements;
- Multipath signal detection;
- Need for smoothing and debouncing;
- Timing drift and clock inaccuracies; and
- Precise knowledge of gateway locations.
Each of these variables can contribute to performance degradation in accuracy of localization of your devices. Network-based location computed from timing measurements can be a very low-power path for providing localization, but you need to be aware that performance will vary by network and network provider in addition to the above technical challenges.
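To make the TDOA mechanics concrete, here is a toy two-dimensional solver: given range-difference measurements to known gateways, it searches a grid for the position that best explains them. The gateway layout and units are hypothetical, and production solvers use iterative least-squares rather than a brute-force grid:

```python
# Toy TDOA localization. Gateway layout and units are hypothetical; real
# solvers use iterative least-squares rather than a brute-force grid search.
import math

gateways = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0)]
true_pos = (400.0, 700.0)   # used only to synthesize measurements for the demo

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# TDOA measurements, expressed as range differences relative to gateway 0
# (arrival-time differences multiplied by the speed of light).
tdoa = [dist(true_pos, g) - dist(true_pos, gateways[0]) for g in gateways[1:]]

def solve(step=10.0):
    """Return the grid point whose range differences best match the measurements."""
    best, best_err = None, float("inf")
    for xi in range(101):
        for yi in range(101):
            p = (xi * step, yi * step)
            err = sum((dist(p, g) - dist(p, gateways[0]) - t) ** 2
                      for g, t in zip(gateways[1:], tdoa))
            if err < best_err:
                best, best_err = p, err
    return best

print(solve())   # (400.0, 700.0)
```

The sketch also shows why gateway geometry matters: each TDOA measurement constrains the device to a hyperbola, and it is the intersection of hyperbolas from spatially diverse gateways that pins down a unique position. Remove that diversity, as in the string-of-pearls deployment discussed below, and the residual surface flattens along one axis.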
Ensure you perform or are presented with enough evidence that a given system will meet your real-world needs. Coverage and deployment numbers are not always sufficient to indicate what level of accuracy a given area of coverage will provide.
Take urban morphology, which describes the landscape, building structures, terrain and so forth that may affect signal propagation, reflection and occlusion. The performance of all localization technologies is impacted by these features to some degree. For example, GNSS suffers in “urban canyons,” where dense forests of tall buildings limit satellite visibility and create multipath signal reflections that can decrease GNSS accuracy.
LPWANs are no exception to this rule. In order to determine accuracy and availability of a given location technology, we always look at varying our test cases to measure different performance capabilities as in the below example test site selections.
If you are doing your own testing, ensure you choose test locations in a diverse set of urban morphologies, as in the figure below.
There are additional techniques that are available for improving the overall performance of TDOA:
- Hybridize timing measurements with power-based measurements and signal-to-noise measurements to enhance accuracy in gateway diversity challenged environments; and
- Stationary detection and convergence techniques.
For example, we have employed these techniques to successfully reduce error to ~150 meters at the 67th percentile using single location attempts while in an extremely challenging deployment environment. In addition, we have demonstrated the value of convergence in a smoothing algorithm for higher accuracy of static devices — achieving 50-meter accuracy for these devices using the same set of single location attempts.
Let’s take a bit of a deeper dive to show how some of these techniques can be applied in real-world situations.
The test area was chosen to test localization performance in a particularly challenging environment. In this scenario, the challenge was primarily due to the physical deployment of the gateways, which were arranged in a straight-line, “string of pearls” configuration. This configuration is challenging in that the TDOA solver can only solve in one dimension: along the line of gateways. The lack of gateways in the second dimension limits the overall accuracy that can be achieved when using TDOA. Fortunately, a power-based method allows the solver to resolve location in the second dimension.
The figure below shows the following:
- Gateways in blue;
- Fixes derived based on TDOA only measurements (yellow pins) for two test points; and
- Fixes derived based on power-based measurements only (pink for point 1 and white pins for point 2).
The resultant hybrid TDOA location is shown in the following diagram for the two static test locations:
As mentioned, by taking advantage of multiple packets over time, we have shown we can reduce this error to ~50 meters for static devices.
An example of the convergence process in action is shown in the figure below. The figure shows a series of single-shot locations for five devices (each device a different color) and, at any given moment, the estimated error after smoothing all fixes for a given device. For example, the plot shows that the orange curve (device) has an initially large error of almost 450 meters, but very quickly converges to 50 meters as the initial outlier/poor fix is smoothed out. As mentioned, TDOA measurements are very sensitive to multipath error, requiring that erroneous measurements be weighted and aggregated with caution in a location solver.
As the convergence algorithm receives more information, it begins to converge on the true position as shown below.
For the full animation, click here.
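The smoothing idea for static devices can be sketched with a coordinate-wise running median over single-shot fixes. The coordinates below are illustrative values in meters from an arbitrary local origin, not real measurement data:

```python
# Sketch of smoothing for static devices: aggregate single-shot fixes with a
# running median so one poor multipath fix does not dominate. Coordinates are
# illustrative (meters, arbitrary local origin).
from statistics import median

def smoothed_fix(fixes):
    """Aggregate single-shot (x, y) fixes with a coordinate-wise median."""
    return (median(x for x, _ in fixes), median(y for _, y in fixes))

# Five single-shot fixes for one static device; the first is a large outlier.
fixes = [(450.0, 300.0), (40.0, 10.0), (55.0, -20.0), (50.0, 5.0), (45.0, 15.0)]

est = smoothed_fix(fixes)
print(est)   # (50.0, 10.0): the outlier no longer dominates the estimate
```

A mean would be dragged roughly 80 meters toward the outlier here; the median discards it entirely, which mirrors how the convergence curves in the figure fall from a ~450-meter initial error toward the ~50-meter steady state.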
While IoT is more popular than ever among enterprise technology teams, some industries have been slow to invest. Unlike MOBI’s consumer packaged goods (CPG) sector customers, companies without a mobility management partner are hesitant to sacrifice the large amount of time, money and labor required to deploy IoT. In fact, the CPG industry overall ranks second to last in terms of IoT spending versus total revenue: Less than three-tenths of one percent of revenue generated is invested in IoT, while the average industry globally invests more than four-tenths of one percent, a rate at least a third higher than CPG’s.
Also, IoT endpoints have historically been used by retailers to better collect, analyze and interpret consumer behavior to improve the customer experience. Unlike retail, inexperienced CPG mobility programs are more likely to rely upon in-store audits and partner-shared insights to understand consumer behavior instead — making IoT feel more like a luxury than an absolute necessity where gaining market insight is concerned.
However, things are starting to change. CPG companies are uncovering new, valuable uses for today’s IoT technologies that weren’t possible a few years ago. Here are eight examples that show how IoT is benefitting businesses in this sector:
1. More personalization
IoT offers CPG companies tremendous advantages and new product personalization options. By creating new channels to collect and understand market data more deeply, industry players can use these technologies to increase customer interaction, satisfaction and loyalty with specially designed offerings.
Some organizations are even combining offline tactics with IoT to better enable customers and increase sales. Advanced systems can detect when someone is browsing an out-of-stock product online and automatically offer directions to a nearby store that has the item in stock along with a discount coupon to make up for any inconvenience.
2. Less delay
Sensors and other mobile endpoints can help CPG eliminate traditional manufacturing and supply chain gaps. Relevant stakeholders can now be alerted immediately if anything goes wrong with real-time data streams and statuses attached to individual product shipments.
Predictive maintenance tasks fueled by IoT systems also greatly reduce the likelihood of unplanned network errors and accelerate company response times should an issue arise.
3. Better collaboration
IoT fuels stronger, more meaningful CPG relationships with retailers by creating a chance to collaborate and co-invest in tech-driven initiatives. In doing so, both parties aim to eliminate out-of-stock scenarios and improve product availability, leading to long-term strategies and success.
4. Enhanced insights
An influx of new consumer data enables the CPG industry to identify behaviors, patterns and trends that companies couldn’t reveal otherwise. That means smarter spending and product development decisions that align with market demands.
Organizations with a digital sales presence will even be able to use IoT to suggest products, offer discounts and push notifications to online shoppers as they browse offerings, increasing the potential for add-on sales and enhancing the customer experience.
5. Real-time tracking
Moving and transporting goods also becomes more accurate and aware with IoT’s integration. Advanced sensors can help CPG enterprises monitor real-time fluctuations in temperature, product status and so forth to optimize operational processes and potentially create more effective, efficient workflows.
6. Smarter stocking
Smart shelves and inventory stocking systems use IoT to make continuous product updates that alert CPG organizations when item levels are low. This not only gives retailers the ability to avoid empty shelves and dissatisfied customers, but also helps CPG companies replenish products before a competitor has an opportunity to replace them.
7. AI-driven assistance
When combined with artificial intelligence, IoT systems give CPG enterprises the ability to scan products and streamline inventory management tasks. These enhanced technologies can even automatically recommend products to digital consumers in a way that maximizes sales and the impact of special promotions.
8. Global security
Through RFID and GPS, IoT makes it possible to track products at more in-depth levels than ever before. CPG companies that use these systems ensure accurate and timely deliveries while simultaneously minimizing theft and loss incidents.
If these benefits sound too good to pass up, an IoT initiative may be in your organization’s not-too-distant future. If you’re considering an advanced device deployment for the first time, however, keep these three things in mind:
While there are impressive IoT technologies capable of vast functionality, ultimately the success of any enterprise deployment depends on the digital maturity of the people interacting with it. Even the most advanced systems fail if workers can’t figure out how to use them or aren’t willing to try.
Since IoT is expected to impact consumer behavior, employee productivity and HR management, CPG companies need to formulate strategies and carefully implement these new devices.
Enterprise workflows will also be impacted by IoT’s workload. Some processes will need to be redesigned or combined with others to make these advanced technologies perform to a level that satisfies business needs.
Data that’s collected and processed by connected IoT endpoints empowers decision-makers and supply chain leaders with information to satisfy customers and strategically grow revenue.
The speed and fluctuating types of data generated by IoT systems can be challenging for CPG organizations to manage and secure. Unless a business is willing to invest in internal data storage systems, be sure to research innovations like hype data technology, additional security layers and data storage facilities to have a plan in place before deployment starts.
As mobile technology grows more advanced and integrated within the CPG sector, IoT will help these companies drive innovation and productivity. While the business benefits can be tremendous, it takes careful planning and a strategic approach to make these initiatives impactful.
McKinsey’s recent article, “What it takes to get an edge in the internet of things” advised firms to focus on three habits:
- Begin with what you already do, make or sell;
- Climb the learning curve with multiple use cases; and
- Embrace opportunities for business process changes.
The management consultancy’s analysis of different financial indicators concluded that organizations stand to benefit by implementing a greater number of IoT use cases; the effect levels off at around 30. Its survey also showed that leading companies implemented on average 80% more IoT applications than laggards.
Could such an approach, driven by bottom-up change, significantly transform a firm’s competitive status? Or is it more a case of playing the probabilities for quick win gains?
Here is why it is important to prepare strategically. Everybody accepts that IoT technologies and innovation will change our world. As far back as 2015, McKinsey reported that IoT will significantly alter broad swathes of the economy. Many businesses will be affected, either through direct competition or because of indirect business model innovation. And these won’t simply be short-run impacts. Any attempt to master IoT capabilities and systems, through good habits, needs to address a firm’s long-term model. Businesses should not stop at quick-win or high-priority technologies, but build toward longer-term ones.
The need to manage short-term imperatives while not ignoring long-term implications is not a new phenomenon. Large organizations frequently squander good ideas. This is the kind of situation that saw Blockbuster, Kodak and Nokia lose their respective market positions. In the case of Blockbuster and Kodak, both had a decade of notice of the impending changes bearing down upon them, but could not bring themselves to make the changes necessary to take advantage of new opportunities.
What does a holistic IoT strategy look like?
Most medium and large companies will be home to hundreds of use cases and IoT applications. Instead of approaching these in piecemeal or silo fashion, it makes sense to anticipate commonalities across the roughly 30 use cases highlighted by McKinsey’s research. One type of commonality deals with IoT knowhow. Another area covers the IoT platforms necessary to manage connected devices and applications. Targeting commonalities is also known as “thinking horizontal” and delivers economy-of-scale benefits.
The Industrial Internet Consortium describes promising technologies in the form of “Centers of Competency” and standardized IoT platforms. Centers of Competency allow businesses to pool and share expertise across organizational departments or business units. Standardized IoT platforms involve the application of common technology and tools across use cases. This minimizes the risk of distributing investments across too many subscale technology stacks.
A second commonality objective centers on the concept of interoperability. On one level, this means designing systems that can accommodate devices and applications from multiple vendors. Through this, companies retain the power of competitive choice.
Over the longer term, interoperability design principles mean that solution architects can build IoT applications that cut across operating silos and expose new application possibilities. This is particularly relevant for small- and medium-sized organizations that will need to embrace IoT systems that span industry supply chains, for example. Think of application interoperability as a way of capitalizing on economies of scope.
A holistic IoT strategy, in other words, looks ahead for design and operating commonalities across multiple applications. It avoids the compatibility, complexity and costs associated with stitching multiple applications together after they have been deployed.
Prepare for the future by planning beyond today’s priorities
In addition to solving immediate application challenges, organizations should invest some effort in looking ahead to a longer-term roadmap. What are the second- and third-generation application possibilities for initial success stories? Consider, for example, a service provider that is responsible for measuring traffic flows and congestion points in a city’s road network. It might seek to apply some of its data to help a waste management company optimize its garbage collection schedule. Or, it might work with environmental monitoring agencies to understand and eventually manage pollution hotspots.
Rather than focusing primarily on technology challenges and use case requirements, organizations need answers for governance and change management issues. Governance goes beyond the involvement of technology and operational departments. It needs to involve end users and a design approach that meaningfully represents their needs. Change management deals with introducing new systems and techniques in ways that foster user adoption, and with securing support from the executives responsible for corporate strategy and funding decisions.
Organizations also need to work on a financial plan. Beyond the immediate costs involved in a pilot, businesses need to plan for growth and expansion-oriented costs. These may have investment horizons running into the decades, as in the case of infrastructure for the buildings, cities and mass-transportation sectors.
As companies explore these issues and investigate longer-term implications, they will also come across new revenue opportunities from service innovation and complementary business model changes.
Embracing opportunities only as they arise, and only within an organization’s core business areas, won’t be enough. It leaves too many new and transformative opportunities unaddressed. To compete at the cutting edge, businesses have to be proactive and look further afield.
Traditional IoT deployments have consisted of networked instrumentation, but simply adding instrumentation does not make a smart city. Cities can always install something like a security camera and connect it back to a private data center, but unless the city is analyzing the data quickly and applying context to that data in the right way, then all it is doing is collecting and storing information. To be truly smart and to scale economically, that networked device needs to have compute resources either onboard or somewhere close by. Edge computing, in other words.
Edge computing is a better fit for IoT that’s positioned around a large geographical area, areas of low connectivity or areas that are relying on multiple sensors working together to deliver a complex picture. Any computing that’s done on the device means less computing that needs to be done in a centralized data center. It also means smaller workloads that need to be transmitted over scarce bandwidth. In this way, there are more resources free for mission-critical applications.
Cities are moving to the edge
Much of the use case for smart cities involves protecting citizen safety. This could mean detecting a crime in progress, alerting first responders to a house fire or directing emergency services to a citizen having a heart attack. Seconds matter in all of these scenarios — which is why edge computing is such a priority. Here’s an example: Let’s say a city installs a sensor that monitors air quality, and one day it detects an anomalous reading. It sends its readings to a data center hundreds of miles away. That data center then has to spend time processing the data and then has to spend time sending it back. Meanwhile, the gas leak that the sensor detected has turned into a roaring fire.
In a scenario where the air quality sensor is equipped with on- or near-device computing power, that fire doesn’t happen — or at least when it does, first responders are already on their way. The device uses its onboard computing power to conclude that a leak is occurring and then notifies the relevant authorities directly, without relying on backhaul.
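A minimal sketch of this on-device triage, assuming a simple fixed alert threshold; the threshold value and the routing labels are illustrative, not real air-quality limits:

```python
# Sketch of on-device anomaly triage for an air-quality sensor.
# The alert threshold and labels are illustrative assumptions.

GAS_PPM_ALERT = 50.0  # hypothetical alert threshold for a combustible gas

def triage_reading(ppm, alert_threshold=GAS_PPM_ALERT):
    """Decide locally whether a reading needs an immediate alert.

    Returns "alert" for readings that should go straight to first
    responders and "batch" for routine readings that can wait for
    the next bulk upload to the data center.
    """
    return "alert" if ppm >= alert_threshold else "batch"

def process_readings(readings):
    """Split a batch of readings into immediate alerts and deferred uploads."""
    alerts, batch = [], []
    for ppm in readings:
        (alerts if triage_reading(ppm) == "alert" else batch).append(ppm)
    return alerts, batch

alerts, batch = process_readings([3.1, 4.0, 72.5, 3.8])
# The anomalous 72.5 ppm reading is flagged for immediate local alerting;
# routine readings are queued for later backhaul.
```

The point of the sketch is that the decision happens on the device itself, so only the anomaly travels over the network in real time.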
Cities have recognized the potential impact of edge computing, and a number of smart city projects have begun to incorporate it in various ways, including:
- Water quality: In New York City, a complicated and aging water system supplies millions of people. A recent Legionella outbreak affected an NYC hospital, but by using IoT sensors in the water supply, municipal workers were able to flood the connected pipes with elevated levels of chlorine, preventing the outbreak from spreading through the water supply.
- Municipal buses: Cities are working to make buses safer by installing recording systems that identify potentially violent incidents. It’s very hard to stream video and audio from a moving vehicle, so the IoT surveillance systems on municipal buses are provisioned with computing power that allows them to recognize and report problems.
- Highway tolls: Last but not least, almost every commuter on a toll road has been the beneficiary of an edge-powered IoT system. Connectivity is hard to come by on a busy highway, so license plate cameras take photos and conduct optical character recognition locally, then transmit to a central private cloud at night when there’s less internet traffic.
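The toll-camera pattern, heavy processing at the edge with bulk transfer deferred to off-peak hours, can be sketched as a simple store-and-forward queue. The `recognize_plate` helper here is a stand-in for real on-device OCR:

```python
# Store-and-forward sketch for an edge toll camera: plates are recognized
# locally and queued, then uploaded in bulk when bandwidth is cheap.
# recognize_plate is a toy stand-in for real on-device OCR.

from collections import deque

def recognize_plate(image):
    # Placeholder for OCR; in this sketch the "image" is already a string.
    return image.strip().upper()

class TollCamera:
    def __init__(self):
        self.queue = deque()  # recognized plates awaiting upload

    def capture(self, image):
        # Heavy work (OCR) happens locally; only a short string is queued.
        self.queue.append(recognize_plate(image))

    def nightly_upload(self):
        # Drain the queue when there's less internet traffic; return what was sent.
        sent = list(self.queue)
        self.queue.clear()
        return sent

cam = TollCamera()
cam.capture(" abc123 ")
cam.capture("xyz789")
cam.nightly_upload()  # sends ["ABC123", "XYZ789"], leaving the queue empty
```

Transmitting a handful of short strings at night is far cheaper than streaming raw images from a busy highway in real time, which is the whole appeal of the pattern.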
Edge computing makes sense where time is of the essence and where good connectivity is hard to find. Cities have a lot of these areas, and the use cases above just scratch the surface of where edge computing can be applied. The remaining question is one of priorities: where should cities begin to implement edge computing in order to maximize outcomes?
How should cities build an edge computing infrastructure?
In concept, building out an edge computing infrastructure for smart cities isn’t hugely different from building out other IT services. Cities must define what they wish to achieve via IoT, define their priorities and support them accordingly. For instance:
- What is the goal of a smart city IoT project? Do you want to reduce accidental deaths, stop crime, monitor the quality of the environment, reduce traffic or something else?
- How important is the project? If it goes offline, will people’s lives be in danger? If so, is the danger immediate or long term?
- For mission-critical activities where lives are on the line, cities need to design redundancies into their systems and have support teams on standby to fix them when they break.
For some IoT services, such as highway tolls, uptime doesn’t matter as much. Cities want to achieve smart fare collection, but commuters won’t notice or suffer unduly if the system goes down for a while — and in the meantime, the license plate camera is still snapping photos, ready to upload them once connectivity comes back. The same can’t be said for something like a water-quality monitor. Here, it’s essential to have local computing resources on hand so that if a network connection goes down, city administrators still have access to real-time intelligence and analysis.
Additionally, analyzing the chemicals in the water isn’t enough. That analysis has to be done as close to real time as possible to prevent widespread contamination. Then, combining that data with numbers around flow volume and pressure, you can model the potential spread of a contaminant to be certain that you’ve flushed the water supply thoroughly. It’s all about combining data together in the right way and on a platform that can cope with the demands of the data analysis.
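As a rough illustration of combining those readings locally, here is a toy rule that flags a pipe segment for flushing. The thresholds and the intrusion heuristic are assumptions invented for the sketch, not real water-safety limits:

```python
# Toy rule combining chemistry, flow and pressure readings to decide whether
# a pipe segment needs flushing. Thresholds are illustrative assumptions.

CHLORINE_MIN_MG_L = 0.2   # hypothetical minimum residual chlorine
PRESSURE_MIN_PSI = 20.0   # hypothetical minimum safe line pressure

def segment_needs_flush(chlorine_mg_l, flow_lps, pressure_psi):
    """Flag a segment when residual chlorine is low, or when low pressure
    combined with active flow suggests possible contaminant intrusion."""
    low_chlorine = chlorine_mg_l < CHLORINE_MIN_MG_L
    possible_intrusion = pressure_psi < PRESSURE_MIN_PSI and flow_lps > 0
    return low_chlorine or possible_intrusion

segment_needs_flush(0.05, 12.0, 55.0)  # True: chlorine below minimum
segment_needs_flush(0.4, 12.0, 55.0)   # False: readings within range
```

Because the rule is cheap to evaluate, it can run on local compute resources even when the network link to the data center is down.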
Based on our examples, there are three tiers where cities should focus their investments in IoT infrastructure: the edge, the private cloud and the public cloud. Edge computing may be more expensive, since it involves purchasing instrumentation with added computing power for onboard analytics, but it’s useful for high-priority, low-bandwidth scenarios. A city’s private cloud has a little more latency and a little more computing power. It’s harder to get workloads there, but it’s easier to process them once they arrive.
Lastly, the public cloud has the highest latency and the highest amount of computing power. Data centers in the public cloud may be located in a completely different region, but their processing power is functionally infinite. This mixture of capabilities suggests a framework for smart cities to prioritize their workloads and prioritize their investments.
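That three-tier framework can be expressed as a simple triage rule. The inputs and the placement logic below are one illustrative reading of the framework, not an established standard:

```python
# Illustrative triage rule for the edge / private cloud / public cloud
# framework: place a workload by trading off latency criticality against
# bandwidth and compute needs. The rules are an assumption for the sketch.

def place_workload(seconds_matter, bandwidth_constrained, heavy_compute):
    """Return a suggested tier for an IoT workload.

    - life-safety or real-time workloads stay at the edge;
    - bandwidth-constrained but non-urgent, lighter work suits the
      city's private cloud;
    - everything else can use the public cloud's near-infinite compute.
    """
    if seconds_matter:
        return "edge"
    if bandwidth_constrained and not heavy_compute:
        return "private cloud"
    return "public cloud"

place_workload(True, False, False)   # "edge": e.g. gas-leak detection
place_workload(False, True, False)   # "private cloud": e.g. nightly toll uploads
place_workload(False, False, True)   # "public cloud": e.g. city-wide analytics
```

A real deployment would weigh many more factors (cost, data residency, redundancy), but the sketch captures the prioritization the framework suggests.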
For a smart city project to work, administrators need to identify where their IoT systems can prevent life-or-death scenarios. They then need to determine where these situations may involve a combination of congested bandwidth and heavy workloads, such as image recognition or speech analysis. In these scenarios, taking compute resources out of the cloud and placing them close to instrumentation can have an immediate positive effect on citizens’ health and well-being.
The trajectory of data growth resulting from myriad devices — macro to micro — that capture or create data is beyond astounding. There’s more data today than yesterday; there will be more — much more — data tomorrow than today. The internet of things is the newest contributor to the massive volumes of data created daily. This surge has once again prompted the industry to evaluate data management strategies from the perspectives of scale, data gravity, integration and general security. Yesterday’s approaches to data management are no longer adequate in dealing with the volume, diversity and interconnectivity that characterize IoT. Scalable infrastructure and centralized management are required.
Infrastructure needs to scale easily and globally
Volumes of data generated by IoT are the initial shock wave with which IT organizations have to contend. Increasing data volumes have been a concern for years, while decreasing storage size and cost have made these increases manageable. But the sources, locations and speed at which IoT data is generated require rethinking the entire data lifecycle. Imagine how much data is generated and used by, say, an autonomous vehicle. Or consider how much data you generate in a day if you’ve enabled location services on your mobile device. With regard to data management, we’re once again asking the following questions:
- Can the existing network and communications infrastructure handle these volumes?
- Where are these data best managed — in the data center, in the cloud, at the digital edge or in all of these locations?
- Who should have access to these data — operations, maintenance, compliance, finance, external service providers?
- What are the retention requirements, operationally and legally, for these data?
- What is the expected data growth over the next three years?
These questions are both rhetorical and practical, and there’s a common thread in the answers to each: the need to easily scale your IT infrastructure to support data management and processing growth for years to come.
An ever-increasing number of enterprises respond to this need by moving to the cloud, which has proven financial and operational benefits in contrast to on-premises data centers. The challenge for many enterprises, however, is the number of cloud providers with which they need to work to support the variety of applications, operations, data and geographies in which they operate. Reduced costs for capital expenditures and operating expenses can lead to greater complexity when managing a diverse infrastructure.
IoT data gravity: With volume comes value
As the volume of IoT data increases in any one location, it acquires data gravity. In other words, as data volume grows, other applications or functions find value in that data. In turn, those applications usually increase that volume even further. An instrumented drill string on an offshore platform tracks depth, speed, angle, temperature, head pressure and other operational data. That’s all useful in managing that single downhole operation.
However, that data becomes even more valuable when combined with data from hundreds of other downhole operations. By analyzing that data, operators can predict and optimize the performance of drilling operations in similar environments or locations. When combined with geomorphic data, it could lead to more efficient exploratory techniques. Equipment manufacturers also benefit from operational data, especially when it indicates failure or suboptimal performance. Such insight leads to improved product design and better preventative maintenance schedules.
In this example, large data volumes provide greater insight, which will benefit operators, manufacturers and maintenance personnel. The greater the volume of data, the greater the inherent value of that data. Your IT infrastructure just needs to capture, manage and share these data securely.
IoT requires secure integration
Much of the value of IoT comes from the interconnectivity, whether wired or wireless, of devices, processors and storage at the physical level and myriad applications and services that transform bits into value. These data acquire greater value when shared securely with legitimately interested parties. A drilling equipment manufacturer gains significant benefit from analyzing operational data shared by its customers.
However, secure integration among the components and connections that comprise your IoT environment is a challenge. How do you efficiently connect, collect, exchange and manage data while maintaining security as data moves over an expanding IoT network?
The concept of interconnection — private data exchanges between businesses — addresses this challenge. It provides a secure nexus to integrate data sources and services at the digital edge to reduce latency and optimize performance. For many IoT operations, real-time processing is a baseline requirement when operational data is generated and needs to be analyzed on the factory floor, in the field or at a busy intersection.
Data security begins with encryption strategy
In addition to secure interconnections to protect data in motion, enterprises also need to focus on the security of data at rest. The threat of cyberattacks has expanded beyond personal and financial data, and now any enterprise with significant physical operations — manufacturing, utilities, transportation, city infrastructure, chemical and petroleum, pharmaceutical, telecommunications and more — must be concerned with the security of operational and intellectual data.
A successful breach of an operational data store combined with surreptitious modification by a knowledgeable hacker could disable a critical process. Organizations that depend upon intellectual property, whether it be research or proprietary processes, need to protect that information.
Data security requires a multifaceted approach, but at the most fundamental level, a sound data encryption strategy can be your strongest defense. In the context of IoT and the widely distributed nature of operations across multiple cloud environments, a centralized approach to encryption key management is needed. This will allow you to manage the entire lifecycle of encryption keys, regardless of where the keys are being used across multiple cloud providers.
Encryption key management needs to be delivered as a service. The traditional hardware security module (HSM) approach can’t readily provide scalability in multi-cloud environments. Encryption key management as a service also provides an added level of data security by managing keys separately from encrypted data: Encrypted data is useless ciphertext without access to the encryption keys.
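A toy sketch of that separation, using envelope encryption in which a central key service wraps per-object data keys. The XOR cipher is a deliberate stand-in for a real algorithm such as AES and must not be used as-is:

```python
# Toy sketch of centralized key management with envelope encryption:
# a master key wraps per-object data keys, so ciphertext stored in any
# cloud is useless without the key service. The XOR "cipher" is a
# deliberate stand-in for a real algorithm (e.g. AES); do not use as-is.

import secrets

def xor_bytes(data, key):
    # Toy cipher: XOR with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class KeyService:
    """Central key manager: hands out wrapped data keys and unwraps them
    later, regardless of which cloud holds the ciphertext."""

    def __init__(self):
        self._master_key = secrets.token_bytes(32)  # never leaves the service

    def new_data_key(self):
        plain = secrets.token_bytes(32)
        wrapped = xor_bytes(plain, self._master_key)
        return plain, wrapped  # caller encrypts with plain, stores only wrapped

    def unwrap(self, wrapped):
        return xor_bytes(wrapped, self._master_key)

kms = KeyService()
data_key, wrapped_key = kms.new_data_key()
ciphertext = xor_bytes(b"drill telemetry", data_key)
# Store ciphertext + wrapped_key anywhere; only the key service can recover:
recovered = xor_bytes(ciphertext, kms.unwrap(wrapped_key))
# recovered == b"drill telemetry"
```

The structure is the point: the ciphertext and the wrapped key can live at the digital edge or in any cloud, while the master key stays inside the managed key service.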
Scalable and reliable IoT
You can more efficiently address the issues of scalability, data gravity, integration and security via a global platform that enables you to deploy, directly connect and effectively scale your IoT infrastructure. Rather than attempt to build your IoT infrastructure piece by piece, opt to work with an organization that already provides a global footprint, access to IoT and related services from best-in-class providers, and access to the top clouds and networks. An HSM-as-a-service approach simplifies provisioning and control of encryption keys and provides cloud scalability, secure key storage, encryption and tokenization services at the digital edge of IoT operations.
Consider the humble car tire. This round treaded rubber tube is arguably the most important component of your automobile, particularly when it comes to performance and safety. It’s the only part of your vehicle that actually touches the ground — the tire lives, quite literally, where the rubber meets the road — and it has a number of very important jobs. A tire makes the car accelerate, turn and stop. Remove just one tire and the car isn’t going anywhere.
Tire companies are starting to sense that there might be some new jobs for their product. They’re now embedding sensors in tires to monitor tire pressure, tread wear and tire temperature. With three billion tires sold globally every year, think of how much information sensor-enabled tires could collect (fuel consumption, driver performance, road condition, distance traveled). Harvesting and analyzing this valuable information could position a tire company to be more informed about your driving history than your auto insurance company.
The business model opportunities here are vast: A smart tire company could provide services such as safe driver certification, car operator monitoring, accident report verification and road condition reporting. Smart tires could provide immediate customer benefit, improve service and efficiency, and enable a fundamental shift from a reactive to a preventive and proactive service model. Imagine a tire repair subscription service, one that remotely monitors wear and sends a technician out to replace a customer’s tires at their home. Looking at a possible next generation of this technology, the sheer number of on-the-ground mobile transmitters could provide a way to track and enhance driver safety. Future tires could communicate with autonomous vehicle control systems, sensing road and weather conditions and adapting to them.
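As a rough sketch of how such a subscription service might turn tire telemetry into a dispatch decision; the field names and thresholds are invented for illustration:

```python
# Illustrative sketch: turning raw smart-tire telemetry into a proactive
# service dispatch. Field names and thresholds are assumptions, not real
# manufacturer specifications.

TREAD_REPLACE_MM = 3.0  # hypothetical tread depth that triggers replacement

def dispatch_plan(tires):
    """Given per-tire telemetry dicts, return the ids of tires needing
    service: worn tread, or pressure more than 20% below recommended."""
    return [
        t["id"]
        for t in tires
        if t["tread_mm"] <= TREAD_REPLACE_MM
        or t["pressure_kpa"] < t["recommended_kpa"] * 0.8
    ]

fleet = [
    {"id": "FL", "tread_mm": 6.1, "pressure_kpa": 230, "recommended_kpa": 240},
    {"id": "FR", "tread_mm": 2.8, "pressure_kpa": 238, "recommended_kpa": 240},
    {"id": "RL", "tread_mm": 5.5, "pressure_kpa": 150, "recommended_kpa": 240},
]
dispatch_plan(fleet)  # ["FR", "RL"]: worn tread and severe underinflation
```

A monitoring backend running a rule like this could schedule the technician visit before the customer ever notices a problem, which is exactly the reactive-to-proactive shift described above.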
But this is much bigger than tires. There are so many physical objects we interact with in our daily routine that could be a gateway to a larger service model. The technology in products is growing more sophisticated as is the ability of products to solve very specific user needs. These highly focused products will bring more attention to larger service models that either provide an unmet need or streamline a connection to the end user.
So, what’s your tire?
Here are some digital transformation identifiers to consider when determining if there’s a larger service model for the products your company produces. Is there a place for them in the connected digital ecosystem? Could your office chair monitor posture and wellness? Could your light switch monitor the inside of your walls? Let’s determine if connectivity could drive a new business model for your organization.
Identify the larger activity
Achieving and maintaining excellence at your product’s core activity requires constant evolution and iteration … but this is not enough if you wish to use digital technology to expand into the service arena. To do so, your product must partner with its intended user and context to understand how it can enable a heightened relationship.
How can your product partner with customers to address unmet user needs? It’s important to identify the larger activity the user is engaged in when using your product and discover how to support the user’s overall goal — and not just address a singular aspect of it. For instance, Nest’s mission is “to create a home that takes care of the people inside it and the world around it,” which goes beyond selling a single thermostat or home monitoring system.
Own the larger ecosystem
Once you’ve identified the larger activity, consider the collection of products and services that can be incorporated under the umbrella of your product’s service model. Does this ecosystem allow the freedom to adapt product offerings while staying true to your product’s core intent? If so, you can provide multiple experiences for your customers by establishing a single platform that satisfies multiple customer needs. For example, Caterpillar has built a data ecosystem that helps its customers cut operating costs and increase productivity. The company is now extending this ecosystem to non-Caterpillar equipment, maintaining its core function as orchestrator of increased efficiency, productivity and safety in construction equipment.
Consolidate where possible
“Simplify, simplify,” wrote Henry David Thoreau in Walden, and he was on to something. You should cast off unneeded layers of interaction to achieve a more direct dialogue between your product and your customer. Don’t allow a third party to step in between your product’s service and your customer. Blockchain, for example, is a decentralized system that can quickly prove ownership of information, eliminating the need for third-party intermediaries and reducing overhead costs when people trade assets directly with each other.
Allow the customer to participate
Be transparent about the data you are collecting with your connected device and give the user access to this data. This will allow users to personalize and develop unique ways to interpret and employ the information. In turn, you’ll be able to use these insights to create offerings catered directly to your customers’ needs, which should result in a personalized delivery of service.
In 2018, it’s a very good idea to be as open as possible with the people you’re serving. Today’s consumers are well aware of the tradeoffs here. They understand, for instance, that in giving their data to Waze, they get the app’s superb and reliable navigation software — but they don’t feel like their trust is being abused. If you’re going to go the data collection route, be as open with the data donators as you can be.
Our current state of digital transformation is allowing us to reconfigure our relationship with the product and services that surround us. Understanding how your product can better engage with users and promote information sharing is paramount. You can now activate your product and strengthen important connections with the user to create more direct pathways of interaction. How can your product become less static, learning, adapting and evolving based on user behavior? The success of your product is its ability to customize itself to the user’s contexts and needs. Smart connected products require a rethinking of design to humanize technology and develop a stronger relationship with the people using them. Companies that can accelerate this transition from product to service will be able to establish business models they can own and control. This heightened connection with users could also make a huge positive impact for society. With great product positioning comes greater responsibility, and the objects of today need to act as the bridge for the product services of tomorrow.
Question is, are you ready to take a spin? At the very least, you should go and kick the tires.