It’s astounding. Time is fleeting. Look back just 15 years or so and my, how things have changed. What were the hot tech gadgets and trends in the first few years of the 21st century? The Nokia 6610 was the best-selling cellphone in 2002; it featured text messaging — its memory could store up to 75 texts! — but didn’t have a camera or email or internet browsing. Desktop computers far outnumbered laptops. Netflix was still DVDs-by-mail only. There was no Facebook or Twitter. Remember Myspace?
This was also the time in history when Cees Links, now considered a Wi-Fi pioneer, made some bold predictions about the future in his 160-page e-book, The Spirit of Wi-Fi, along with an explanation of how the first wireless LANs were created and how an interesting 1998 meeting with Steve Jobs changed the face of wireless communications.
Since we are now living in a new wireless world, let’s take a look at how Links’ vision compares to reality as we know it now — and get a sneak peek into what’s coming next.
Prediction #1: Smart everything
Links predicted that business people would have four gadgets to help them in their day: a computer notebook with Wi-Fi and Bluetooth, a palmtop with Bluetooth, a cell phone with GPRS and Bluetooth, and a wireless headset with Bluetooth.
The results: Pretty darn close. Said Links, “I was pretty close, if you replace the palmtop with a tablet. Now you have a laptop to do real work, a tablet for convenient reading and checking the internet on the road, and a smartphone when really nonstationary.” What amazes Links the most is the demise of phone communication — specifically, how phone calls have been largely replaced by text messaging, successor apps like WhatsApp and group chats.
“The other remarkable thing was that in those years, we spoke a lot about videophone conversations becoming commonplace — and our concern about the amount of bandwidth they would require,” Links said. Today, videophone is (almost) free with chat communications, but selfies and Instagram posts are way more popular. It seems that some two-way conversations have been replaced with a series of one-way lobs of status communications.
Prediction #2: Cell phones and palmtops would merge
Links predicted that the number of devices would be reduced by merging the functionalities of a cell phone and palmtop.
The results: Yes, but… The palmtop and the phone did indeed integrate, and the tablet emerged. Though, as Links points out, many people may not know that the first tablet, Apple’s Newton, launched in the mid-1990s. It never really caught on. “It was too early,” Links said, “and the proper data-communication standards and infrastructure didn’t exist yet. Also, the MCUs — the brains of the computer — weren’t powerful enough.” That required essentially another decade of development.
But Links said what he really missed was the need for a tablet: “Frankly, I was initially skeptical when tablets came out. Now I see the tablet slowly starting to take over from the laptop, in the same way that chatting is taking over from emailing. So, who knows? The days of the laptop may be numbered.”
Prediction #3: Smart watches
The results: Bingo. “Let’s not forget the watch,” Links said back in 2002, questioning if it could play a larger role in the world of technological devices beyond mere accessory or jewelry. Clearly, smart watches, like the Apple Watch, Samsung watches and Fitbit devices, have brought this reality to life.
“Honestly, I’m surprised I mentioned that a watch could be more than jewelry,” Links said today. “But indeed, the thought of making the watch more useful than merely tracking the date and time has always lingered, and it still does. I think the watch industry so far has successfully kept the electronic watch at bay for two reasons: First, a watch is still a piece of jewelry, and second, the battery life is still short.”
Fitness trackers are in a similar market position: they’re encroaching on the watch industry, but Links expects that, like smart watches, they won’t succeed in destabilizing it. “I wear a Fitbit,” Links said, “but one that is purely sensing, as a simple bracelet. I love my jewelry watch, but wearing two watches is a little pathetic. Plus, I would get totally annoyed if they weren’t indicating exactly the same time!”
New predictions: The future of Wi-Fi
Now it’s time to look forward and hear what Links thinks is in store for the future. After all, Wi-Fi was in the early stages of adoption when he wrote his book. At the time, Links described it as “a rich standard that would be with us for the coming decades and provide a solid basis for newer capabilities.”
Given that Wi-Fi 6 (802.11ax) is expected this year, what do you think will come next, and what challenges will have to be solved in the future?
Links: Wi-Fi (IEEE 802.11) is indeed still with us and going strong — no end of life in sight! From 802.11b to a, g, n, ac, ad and now Wi-Fi 6 (802.11ax), high-performance wireless technologies have been evolving since the beginning of this century. Essentially, there have always been two major drivers: good coverage in your whole house or office, and faster speed. There have been other underlying drivers, like reducing power consumption — to keep your smartphone from melting — and integrating functions while reducing size and price. The need for higher data rates, bandwidth and capacity will continue, without compromising coverage. This is in line with the fact that video continues to become more and more important. Why should it take hours to download the latest series before going on a trip?
I recently wrote a white paper, “Wi-Fi 6 (802.11ax): What’s It All About?,” that discusses why higher speed, capacity and bandwidth are the key ingredients to success today. Everybody is connected on the same channel at the same time, and we always want more speed. There will be no rest for service providers!
Let’s talk about the smart home. There are multiple use cases that could be created to make the home smarter, like smart lighting, home security or lifestyle monitoring. You mentioned in your book that household applications will grow quickly once the infrastructure is in place. Is this the case today?
Links: Interestingly, the idea of low-power Wi-Fi was floating around a lot, and what we see today is that both Zigbee — which is essentially low-power Wi-Fi — and Bluetooth Low Energy have established themselves, although frankly, it took longer than I expected. I think that’s because the value proposition of Zigbee and BLE is more difficult to grasp: their value lies in their close connection to data management and processing, which requires a completely different way of thinking.
Normally, a company begins with the business case for a product, which drives the application space. But with the smart home, it’s the other way around — the application is driving the business case. This means it has taken longer to establish the value of the smart home, but it’s getting there, slowly but surely.
The challenge is still the infrastructure. Each application almost needs its own gateway connected to the router to get lights, sensors or smart meters connected to the internet, making implementation unnecessarily expensive. One of the larger steps forward is the router or set-top box with Zigbee and Bluetooth Low Energy integrated, and that’s what the industry is working on now.
I think the future of the smart home is distributed Wi-Fi, with a pod in every room serving as an access point. With Wi-Fi 6 and distributed Wi-Fi, consumers will have Wi-Fi everywhere in their home or office. Each pod can also carry wireless communication technologies, like Zigbee or Bluetooth. It will also allow command through voice activation and enable talking to the internet as a common feature in every room. This new infrastructure will help develop multiple use cases in the smart home — all using the same infrastructure.
Finally, how about some new predictions on the evolution of Wi-Fi? Where do you think we’ll stand in another 15 years?
Links: There’s no shortage of demand for both higher data rates and longer battery life, so developments in this area will continue. Nowadays, I have to charge my laptop and my phone every day, which is a nuisance that I grudgingly accept. Data rates continue to be a bottleneck, but the focus probably needs to extend toward system-level performance. My Wi-Fi is way faster than the cable internet link to my house, and sitting behind a very fast connection, imagining an instant reaction to every mouse click with no waiting, makes it clear that the industry still has a lot to improve on basic needs.
But even more exciting is the interaction between wireless connectivity and artificial intelligence. Being able to exchange data all the time — from sensors to work data to exploring thoughts and ideas for leisure or finding opportunities for relaxation and enjoyment — connected to proper guidance from someone who “knows you” and can help: wouldn’t that be a dream come true?
Breakthroughs in human life have always come from technology inventions beyond imagination — cranes to help us lift things, wheels to move us faster than we can walk, writing to help us remember more than we can keep in our head, printing to share ideas wider and faster than we could imagine. And today, connectivity allows us to live in a healthier, more comfortable and more eco-friendly way — and to make better decisions, faster.
A connected world is a better world. Here’s to the next 15 years of Wi-Fi — no doubt it’s a future of great possibility!
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
It’s hardly a secret: a steep rise in the number of connected devices around us is set to change the way we live, work and interact with technology. By 2025, forecasts indicate that there will be as many as 75 billion smart devices globally, introducing us to a new era of hyper-connectivity. These devices will not only collect data, but also produce and process information directly on the products closest to their users, at the edge. The increased functionality and computing power available at the edge is already changing the way companies design and build products, from intelligent construction site video surveillance to oil rig maintenance. In the following article, I will unpack how taking data processing out of the cloud and to the edge can positively impact reliability, privacy and latency.
What exactly is the edge?
Edge computing refers to applications, services and processing performed outside of a central data center and closer to end users. The definition of “closer” falls along a spectrum and depends highly on networking technologies used, the application characteristics and the desired end-user experience.
While edge applications do not need to communicate with the cloud, they may still interact with servers and internet-based applications. Many of the most common edge devices feature physical sensors, such as temperature, lights and speakers, and moving computing power closer to these sensors in the physical world makes sense. Do you really need to rely on a cloud server when asking your lamp to dim the lights? With collection and processing power now available on the edge, companies can significantly reduce the volumes of data that must be moved and stored in the cloud, saving themselves time and money in the process.
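To make that data-reduction trade-off concrete, here is a minimal sketch of an edge node that condenses a batch of raw sensor readings into one small record before anything leaves the building. The field names and thresholds are illustrative assumptions, not taken from any particular product:

```python
# Sketch: an edge node summarizes raw sensor readings locally and
# forwards only a compact aggregate plus any anomalies to the cloud.
# Thresholds and field names are illustrative, not from a real product.

def summarize_readings(readings, anomaly_threshold):
    """Reduce a batch of raw readings to one aggregate record.

    readings: list of floats sampled at the edge.
    anomaly_threshold: values above this are forwarded individually.
    """
    anomalies = [r for r in readings if r > anomaly_threshold]
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
        "anomalies": anomalies,  # only outliers travel to the cloud
    }

# 1,000 raw temperature samples shrink to one small record.
batch = [20.0 + (i % 10) * 0.1 for i in range(1000)]
batch[500] = 95.0  # inject a fault reading
record = summarize_readings(batch, anomaly_threshold=80.0)
print(record["count"], record["anomalies"])  # 1000 [95.0]
```

Instead of shipping a thousand samples upstream, the node sends one record and the single reading that actually needs attention, which is where the time and cost savings come from.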
The stakes are high
With edge computing set to change the way we live and work, it’s critical for businesses to understand what’s at stake for their business models, customer experiences and workforces. Edge computing impacts three dimensions: reliability, privacy and latency — each with profound implications for companies and consumers alike. Additionally, the convergence of edge computing and artificial intelligence is unlocking new opportunities for companies in 2020 and beyond.
A primary motivator driving edge computing’s adoption is the need for robust and reliable technology in hard-to-reach environments. Many industrial and maintenance businesses simply cannot rely on internet connectivity for mission-critical applications. Wearables must also be resilient enough to perform without 4G. For these use cases and many more, offline reliability makes all the difference.
Protecting privacy is both a potential asset and a risk for businesses in a world where data breaches occur regularly. Consumers have become wary that their smart speakers — or the people behind them — are always listening and, rightfully, companies largely reliant on cloud technology have been scrutinized for what they know about users and what they do with that information.
Edge computing helps alleviate some of these concerns by bringing processing and collection into the environment(s) where the data is produced. The leading voice assistants on the market today, for example, systematically centralize, store and learn from every interaction end users have with them. Their records include raw audio data and the outputs of all algorithms involved, attached to logs of all actions taken by the assistant. The latest research and innovations also suggest that interactions are set to become significantly smoother and more relevant based on additional information about end users’ tastes, contacts, habits and so forth.
This creates a paradox for voice companies, and others, that rely on the cloud. For AI-powered voice assistants to be relevant and useful, they must know more personal information about their users. Moving processing power to the edge is the only way to offer the same level of performance without compromising on privacy.
In the simplest terms, latency refers to the time difference between an action and a response. You may have experienced latency on a smartphone if you’ve noticed a slight delay between touching an app’s icon and the app opening. For many industrial use cases, however, there is more at stake than a poor user experience and making users wait. For manufacturing companies, mission-critical systems cannot afford the delay of sending information to off-site cloud databases. Cutting power to a machine a split second too late is the difference between avoiding and incurring physical damage.
When the computing is on the edge, latency just isn’t an issue. Customers and workers won’t have to wait while data is sent to and from a cloud server. Their maintenance reports, shipping lists or error logs are recorded and tracked in real time.
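The arithmetic behind that claim is simple. This sketch uses assumed, illustrative timings (the deadline and round-trip figures are hypothetical, not measurements) to show how a cloud round trip can blow a safety budget that a local hop easily meets:

```python
# Sketch: why round-trip latency matters for a machine-safety cutoff.
# Every timing figure below is an illustrative assumption.

def meets_deadline(sense_ms, decide_ms, network_rtt_ms, deadline_ms):
    """Return True if sense -> decide -> act fits inside the deadline."""
    total = sense_ms + decide_ms + network_rtt_ms
    return total <= deadline_ms

DEADLINE_MS = 50    # assumed safety-cutoff budget for the machine
CLOUD_RTT_MS = 120  # assumed round trip to an off-site data center
EDGE_RTT_MS = 2     # assumed hop to an on-premises edge node

print(meets_deadline(5, 10, CLOUD_RTT_MS, DEADLINE_MS))  # False
print(meets_deadline(5, 10, EDGE_RTT_MS, DEADLINE_MS))   # True
```

With these numbers, the cloud path overshoots the 50 ms budget before any processing even happens, while the edge path leaves comfortable headroom.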
Local computing power becomes the norm
We are living in a centralized world, whether we think about it that way or not. Every time you turn on your mobile phone or open a SaaS application, you are essentially engaging with an interface that represents what is occurring on a cloud server. In his 2016 talk, “The End of Cloud Computing,” Andreessen Horowitz’s Peter Levine outlined a vision for the future of edge computing. “Your car is basically a data center on wheels. A drone is a data center on wings,” Levine quipped. Nearly three years later, Levine’s words couldn’t be more prophetic. With more and more applications capable of functioning in local environments thanks to innovations in edge computing, decentralization is becoming far more than a trendy buzzword, and companies and consumers are benefiting from improved reliability, privacy and latency across their IoT devices.
The data throughput from IoT devices will grow exponentially each year. Users rely on this data to inform decision-making, and more businesses are allowing their processes to be driven by IoT data. One salient example, which will be the focus of this article, is commercial real estate buildings; however, the information provided can be used for other applications as well.
Commercial real estate buildings have hundreds of pieces of equipment that require maintenance and other upkeep to make sure the asset functions as intended. With rooftop units in the penthouse mechanical room, domestic water pumps in the basement and air handler units in tenant spaces, installing sensors on each piece of equipment can be challenging. Traditional systems, such as building management systems, often wire sensors throughout the property — but the price tag for these systems can reach seven figures.
If you don’t want to wire every single sensor due to the cost and complexity, there are several questions to answer when evaluating IoT:
- What type of data do I need to capture for the pieces of equipment within my building?
- How critical are these systems? Do I need to control any of these pieces of equipment remotely?
- What network infrastructure is available in the building?
- How old is the building? What is the approximate height between floors? What is the thickness between concrete slabs?
- What is my overall implementation cost? Will there be any added subscription costs moving forward?
Based on the answers to the questions above, the infrastructure required for an IoT system can vary significantly. The most common building protocols between devices are BACnet MSTP/IP, Modbus RTU/TCP, digital outputs (dry or wet contacts), and analog outputs (4-20 mA, 0-10 V).
All of these protocols traditionally require a physical connection between the data logging equipment and each sensor. The data logging devices convert the analog output into a readable format specific to the sensor.
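As a concrete example of that conversion, here is a small sketch of the standard linear scaling for a 4-20 mA current loop. The 0-150 psi transducer range is a hypothetical choice for illustration:

```python
# Sketch: converting a 4-20 mA analog loop reading into engineering
# units, as a data logger would. The pressure range is an assumption.

def scale_4_20ma(current_ma, lo, hi):
    """Map a 4-20 mA signal linearly onto the sensor's range [lo, hi]."""
    if not 4.0 <= current_ma <= 20.0:
        # Below 4 mA usually signals a broken wire or failed sensor.
        raise ValueError("reading outside 4-20 mA loop range")
    return lo + (current_ma - 4.0) * (hi - lo) / 16.0

# A hypothetical 0-150 psi pressure transducer:
print(scale_4_20ma(4.0, 0, 150))   # 0.0   (bottom of range)
print(scale_4_20ma(12.0, 0, 150))  # 75.0  (midpoint)
print(scale_4_20ma(20.0, 0, 150))  # 150.0 (top of range)
```

The "live zero" at 4 mA is why this loop standard persists: a reading of 0 mA is unambiguously a fault, not a valid low measurement.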
It is not always possible to achieve the desired economics with wired sensors across an entire building. Modern IoT systems can read these common protocols and transmit the data over a wireless communication medium, typically a version of a low-power wide area network (LPWAN). These devices, generally battery operated, can send data on ad hoc, intermittent and/or fixed schedules. LPWAN technology is already in use in Europe and increasingly in North America and the rest of the world.
There are several LPWAN technologies, each with its pros and cons. Some of these include LTE-M (cellular), Narrowband-IoT (cellular), LoRaWAN and Sigfox. There are other options available, but these are the most common and currently available for deployment. Of the four, LoRaWAN is the most common worldwide. LTE-M and Narrowband-IoT (NB-IoT) are close seconds, but unlike LoRaWAN, the deployment is carrier-specific and not region-specific (more on this below).
You may be asking: For a wireless deployment, why not use Wi-Fi or Bluetooth? While these are viable options for some instances, they are not the preferred strategy for small amounts of data being transmitted across long distances and through walls.
The key difference here is that LPWAN technologies communicate at sub-GHz frequencies. LPWAN can communicate through an entire building for the same reason that walkie-talkies and cordless phones in the 1990s had such long range: they communicated in a sub-GHz band — in the United States, around 900 MHz.
As the frequency decreases, the communication range between devices increases, because a lower-frequency signal has a better chance of passing through floors and walls. That said, the amount of data a device can send decreases the farther it is from the receiver — data rate falls off as range grows.
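The frequency advantage shows up even before any walls are involved: free-space path loss grows with the logarithm of frequency. This sketch (free space only; real floors and walls add further attenuation on top) compares an assumed 915 MHz LPWAN link with 2.4 GHz Wi-Fi over the same distance:

```python
import math

# Sketch: free-space path loss shows why sub-GHz signals reach farther.
# The 50 m distance and the 915 MHz / 2.4 GHz pairing are illustrative.

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

loss_915 = fspl_db(50, 915e6)  # ~65.7 dB at 50 m
loss_24 = fspl_db(50, 2.4e9)   # ~74.0 dB at 50 m
print(round(loss_24 - loss_915, 1))  # 8.4 dB in favor of sub-GHz
```

An extra 8 dB of link budget at the same distance is a large margin in radio terms, and it is one reason a handful of LPWAN bridges can cover a building that would need many Wi-Fi access points.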
The advantage with LPWAN is that it minimizes the total time required to deploy an IoT sensor compared to a traditional physical connection between sensors. The cost of these technologies is also decreasing and will continue to do so over the next five to 10 years. The chart below summarizes all four technologies with their capabilities:
As seen above, each wireless technology has its pros and cons. However, as of today, the most promising long-range wireless technology is LoRaWAN, followed by LTE-M and NB-IoT. In the long run, LTE-M and NB-IoT are capable of taking a larger market share, but they are held back by the lack of a single provider offering global coverage.
For example, in the United States, AT&T and Verizon have adopted LTE-M, Sprint is currently deploying an LTE-M network, and T-Mobile has decided to ditch LTE-M and put resources into NB-IoT right away. Every carrier around the world is deploying at its own pace subject to regulations. There is no action plan for global coverage yet.
Unlike LTE-M and NB-IoT, LoRaWAN and Sigfox are carrier-free, which allows one to install wireless bridges in ad-hoc locations within buildings. Communication from these bridges can connect to existing building networks and/or cellular connectivity.
Believe it or not, the one-year anniversary of the date the General Data Protection Regulation went into effect is almost upon us. The May 25, 2018 date will live in infamy for many organizations — particularly those that scrambled to get their people, processes and technologies in order ahead of the GDPR deadline.
But now that we’re a year in, and the initial chaos has died down, it’s the perfect time to reflect upon how GDPR has impacted organizations over the past 12 months — and not just from a data privacy perspective, but from an overall risk standpoint. With this in mind, I’d like to discuss how the regulation has prompted organizations to take IoT security more seriously.
IoT visibility is central to GDPR compliance
IoT has exploded the attack surface, making complete visibility into all connected endpoints across all computing environments a major challenge for many IT security teams. In my first IoT Agenda post, I talked about how, for many of our enterprise customers, it’s not uncommon that we profile a third or more of IP-enabled endpoints as IoT-type devices — yet many IT security teams don’t even know these devices are on their networks.
If you don’t know an IoT device is on your network, how can you protect it? You can’t. Not only does this introduce significant enterprise risk, but, in the case of GDPR, failing to implement proper security controls and data privacy measures on IoT devices leaves you vulnerable to compliance fines and associated consequences, such as a damaged reputation and loss of customers.
Rather than risk noncompliance, many organizations are getting serious about IoT security. And it all starts with visibility.
Network infrastructure monitoring technology helps organizations unearth unknown networks and attached endpoints to gain complete network visibility into all assets residing across all computing environments, as well as data at rest, data in transit and data in process. Armed with this information, internal teams can answer important questions such as: What IoT devices are on corporate networks? What data is being held, where and why? Who’s accessing that data and is access appropriate? Most importantly, with an accurate understanding of the state of their network infrastructure, IT security teams can protect all IoT devices and the data they analyze and transmit as specified by GDPR.
Beyond understanding what IoT devices are on corporate networks, IT security teams must also know where they are, what they’re doing and who they’re communicating with to ensure proper security and compliance measures and to protect personally identifiable information. Helpful practices to consider include monitoring IP addresses on the network and how they’re moving, identifying potential leak paths and unauthorized communications to and from the internet, and detecting anomalous traffic and behavior.
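As a rough illustration of those practices, the sketch below audits a flow log against a device inventory and a per-device allow list, flagging both unprofiled devices and unauthorized internet conversations. All addresses, rules and log entries are invented for the example:

```python
# Sketch: flagging traffic from devices absent from the asset
# inventory, and device-to-internet flows outside an allow list.
# Inventory, allow list and flow log are made up for illustration.

KNOWN_DEVICES = {"10.0.1.20", "10.0.1.21"}          # profiled endpoints
ALLOWED_EXTERNAL = {"10.0.1.20": {"198.51.100.7"}}  # permitted peers

def audit_flows(flows):
    """Return (unknown_devices, leak_paths) found in a flow log."""
    unknown, leaks = set(), []
    for src, dst in flows:
        if src not in KNOWN_DEVICES:
            unknown.add(src)          # device nobody onboarded
        elif dst not in ALLOWED_EXTERNAL.get(src, set()):
            leaks.append((src, dst))  # unauthorized external traffic
    return unknown, leaks

flows = [
    ("10.0.1.20", "198.51.100.7"),  # known device, permitted peer
    ("10.0.1.99", "203.0.113.5"),   # unprofiled device on the network
    ("10.0.1.21", "203.0.113.9"),   # known device, unauthorized peer
]
print(audit_flows(flows))  # flags the unknown device and the leak path
```

Real network monitoring platforms work from live traffic rather than a static list, but the core logic is the same: compare what is actually communicating against what is known and permitted.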
The role of automated policy management
If knowing is half the battle, the other half is action. Once companies have a comprehensive understanding of the endpoints and data residing on their networks, they can develop “zones of control,” bringing each under the right network policies and access rules.
Automated policy orchestration tools help companies achieve continuous security and compliance with regulations, such as GDPR, because they enforce appropriate access policies, rules and configurations on all assets, regardless of how they change or move. Additionally, in the event of noncompliance, policy orchestration technology makes it easier for IT security teams to identify where the violation occurred — a capability that comes in especially handy if an organization needs to meet GDPR’s 72-hour breach notification deadline.
Prioritizing IoT security
To achieve IoT compliance in a GDPR world, organizations must have real-time visibility across all of their networks, devices, endpoints and data. They must be able to immediately detect any suspicious network behavior or compliance gaps. And they must automate response so they can quickly remediate security and compliance violations. Network infrastructure monitoring technology and automated policy management, along with the above tips, are a good start to achieving not only GDPR compliance, but a stronger security posture. And now that most companies have mastered the basic building blocks of GDPR, hopefully we’ll continue to see IoT security and compliance become a greater priority over the coming year.
Back in 2016, predictive maintenance was forecast to be one of the most promising uses of industrial IoT. It seemed like a no-brainer: Who wouldn’t want better information to prevent equipment failure?
So it’s somewhat surprising that predictive maintenance has failed to take off as broadly as expected. A recent Bain survey of more than 600 executives found that industrial customers were less enthused about the potential of predictive maintenance in 2018 than they were two years earlier. In our conversations with buyers, we heard that implementing predictive maintenance systems has been more difficult than anticipated, and it has proven more challenging to extract valuable insights from the data.
Predictive maintenance is just one of many IoT use cases that customers have had difficulty integrating into their existing operational technology and IT systems. As companies in the industrial sector have invested in more proofs of concept, many have found IoT implementation more challenging than they anticipated.
Because of this, we find that customer expectations, while still bullish for the long term, dampened slightly for the next few years (see Figure 1). Our 2018 survey found that buyers of industrial IoT services and equipment expect implementation to take longer than they thought it would back in 2016.
Bain’s 2018 survey also found that among industrial customers, concerns over integration issues — in particular, technical expertise, data portability and transition risks — have become more acute over the past two years (see Figure 2).
- In 2016, customers were most concerned about security, returns on investment and the difficulty of integrating IoT systems with existing IT and operational technology.
- In 2018, security and integration were still top concerns, indicating that tech vendors haven’t made much progress in addressing them.
- Fewer customers are concerned about ROI than in 2016, perhaps because they have been satisfied by the returns on their early implementations. Industrial IoT use cases are beginning to deliver on vendors’ promises.
Customers are increasingly worried about issues that arise during implementation: technical expertise, difficulties in porting data across different formats, and the transition risks. Proofs of concept have revealed these challenges, and companies now realize that although the effort pays off, the devil is in the details.
Despite these barriers, industrial IoT remains a promising opportunity. Bain’s research indicates that the industrial portion of IoT — including software, hardware and system solutions in the manufacturing, infrastructure, building and utilities sectors — continues to grow rapidly, and could double in size to more than $200 billion by 2021 (see Figure 3).
To capture that opportunity, device makers and other vendors of industrial and operational technology need to dramatically improve their software capabilities — not a historical strength for most of them. Leaders are investing heavily in acquisitions to obtain the necessary capabilities and talent. Most of this M&A activity targets companies further up the technology stack — in the realms of software and systems integration — than the core capabilities of industrial companies.
As vendors and manufacturers work to build scale, four groups of actions can help position them for long-term success:
- Concentrate your bets. Focus on select use cases and tackle the key barriers to adoption: security, ROI and integration with IT and operational technology. Learn from proofs of concept and develop repeatable playbooks. Package IoT solutions into scalable products that you then can roll out to customers.
- Find good partners. Acknowledge your capability gaps and find partners to address them. Work closely with cloud service providers, analytics vendors or enterprise IT vendors. At the same time, avoid broad and unwieldy alliances with too many players; partnerships tend to be more effective with a selective approach based on the use case.
- Understand it may take a while to break even. Building capabilities and forging strong partnerships takes time, so commit to long investment periods. Approach the effort with a realistic view of the funding, timeline and staffing changes needed to deliver results.
- Identify new talent. Your best employees excel at their jobs, but new operating models may require different skills. Learn to identify, hire and retain the entrepreneurial talent to thrive in your evolving business model.
Finally, companies will need to be clear on where IoT fits into their operating model. Some executives worry about new products cannibalizing existing products and their revenue. Companies need to allow internal entrepreneurs to build new lines of business without alienating the rest of the organization.
This article was cowritten with Peter Bowen, Christopher Schorling and Oliver Straehle, partners with Bain’s Global Technology practice in Chicago, Frankfurt and Zurich, respectively.
Not long ago, many IT leaders viewed IoT as little more than an interesting science project. Today, companies in every industry rely on IoT insights as part of their core business strategies. According to DigiCert’s recent “State of IoT Security Survey 2018” (registration required), 92% of companies expect IoT to be important to their business by 2020. In all, analysts project the global IoT market to more than double by 2021, reaching about $520 billion.
That’s a whole lot of new devices popping up on the world’s networks. And there’s one group that can’t wait to get their hands on them: cybercriminals. With nearly 10 billion IoT devices forecast to come online by 2020, attackers see billions of new potential attack vectors. The fact that many connected devices still ship with inadequate security makes them even more attractive targets.
“Businesses are bringing insecure devices into their networks and then failing to update the software,” said Vik Patel in a recent conversation with Forbes. “Failing to apply security patches is not a new phenomenon, but insecure IoT devices with a connection to the open Internet are a disaster waiting to happen.”
For some companies, the disaster is already here. According to the DigiCert survey, among organizations struggling to master IoT security, 100% experienced a security mishap — IoT-based denial-of-service attacks, unauthorized device access, data breaches, IoT-based malware — in the past two years. Those issues can carry a big price tag. A quarter of struggling organizations reported $34 million or more in costs incurred from IoT security mishaps.
Fortunately, the IoT security problem is far from intractable; there are mature, proven strategies that organizations can employ to secure connected devices. But the key is to take those steps before a vulnerability or breach is identified instead of trying to retrofit devices after the fact. The most successful organizations employ a security-by-design approach using public key infrastructure (PKI) and digital certificates. Using PKI to reinforce the security basics — authentication, encryption, data and system integrity — you can keep your IoT footprint ahead of the threat.
Based on the same PKI standard that millions of websites rely on every day for secure connectivity, PKI provides an ideal framework for mutual trust and authentication in IoT. In addition to encrypting sensitive traffic, PKI verifies that IoT devices — and any users, devices or systems communicating with them — are who they claim to be. When all parties to IoT communications have a trusted digital certificate vouching for their legitimacy, it becomes much harder for malicious actors to, for example, hijack a device or inject malware into its firmware.
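To make the mutual-authentication idea concrete, here is a minimal Python sketch of configuring mutual TLS for a device connection using the standard `ssl` module. The certificate file names are hypothetical placeholders; a real deployment would load them from the device's secure storage or a provisioning service.

```python
import ssl

# Hypothetical file names; in a real deployment these would come from the
# device's secure element or a provisioning service.
CA_CERT = "iot-root-ca.pem"
DEVICE_CERT = "device-cert.pem"
DEVICE_KEY = "device-key.pem"

def make_mutual_tls_context(ca=CA_CERT, cert=DEVICE_CERT, key=DEVICE_KEY):
    """Build a TLS context that presents the device's certificate and
    requires the peer to present a valid one (mutual authentication)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_verify_locations(ca)      # trust anchor for the peer's cert
    ctx.load_cert_chain(cert, key)     # this device's identity
    return ctx

# In PROTOCOL_TLS_CLIENT mode, Python enables certificate verification
# and hostname checking by default:
base = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(base.verify_mode == ssl.CERT_REQUIRED, base.check_hostname)  # True True
```

The same pattern applies on the server side with `PROTOCOL_TLS_SERVER` plus an explicit `verify_mode = ssl.CERT_REQUIRED`, so that both ends of the connection must prove their identity with a certificate.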
PKI is a perfect match for the exploding IoT sector, as it can provide trust and control at massive scales in a way that traditional authentication methods, like tokens and passwords, can’t. PKI provides:
- Strong data protection: PKI can encrypt all data transmitted to and from IoT devices, so that even if a device is compromised, attackers can’t do anything with the data.
- Minimal user interaction: With digital certificates, PKI authenticates users and devices behind the scenes, automatically — without the interruptions or user interaction required by passwords and token policies. Certificates also provide stronger identity by including information such as the device serial number.
- Secure code: Using code signing certificates, companies can sign all code on the device firmware, assuring only trusted code can operate on the device. This protects against malware and supports secure over-the-air updates to the device.
- Effortless scalability: Originally designed for huge networks and web services with vast numbers of users, PKI can easily scale to millions of IoT devices.
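The code-signing idea above can be illustrated with a much-simplified integrity check. The sketch below uses only a hash digest; real code signing additionally wraps the digest in an asymmetric signature verified against the signer's certificate, but the core logic of refusing any firmware image that does not match its manifest is the same.

```python
import hashlib

def firmware_digest(blob: bytes) -> str:
    """Digest shipped alongside the firmware image (in real code signing,
    this value would itself be signed with the vendor's private key)."""
    return hashlib.sha256(blob).hexdigest()

def verify_firmware(blob: bytes, expected_digest: str) -> bool:
    """Refuse to boot or flash firmware whose digest doesn't match."""
    return hashlib.sha256(blob).hexdigest() == expected_digest

firmware = b"device firmware v1.2"
manifest = firmware_digest(firmware)

print(verify_firmware(firmware, manifest))          # True
print(verify_firmware(firmware + b"X", manifest))   # False: tampered image
```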
Companies typically choose from two options for deploying PKI: implementing and operating their own private PKI framework on-premises, or using hosted PKI services from a public certificate authority. Which approach is right for your organization? Let’s evaluate the four C’s.
The four C’s of PKI
Consideration #1: Control
How much control do you need over your certificate infrastructure? It often depends on your industry. In heavily regulated industries with complex, rigorous compliance requirements, many companies keep everything in-house. This does provide fine-grained control and comprehensive auditing capabilities. But it also requires significant time, money and expertise. Someone in the company must “own” the process of ensuring the framework adheres to industry standards, enforcing policies to establish trusted roles, managing key ceremonies and data storage policies, ensuring reliable certificate renewals and revocations, and much more. The resources required to do PKI in-house right — and the potential to do it wrong and cause significant damage — are often more than a company wants to take on.
This is not a small effort, and it’s not for amateurs. This is why many companies in less-regulated industries, and even many in regulated ones, prefer a hosted solution, letting a public certificate authority handle all the complexity. If you need the control of an on-premises system but you don’t want the management headaches, some PKI providers offer hybrid models. These combine on-premises systems that can issue publicly trusted certificates through a secure gateway that communicates directly with a scalable cloud issuance platform.
Consideration #2: Cost
When you deploy and manage your own private PKI framework, you can build exactly the system you want. But don’t expect it to come cheap. Standing up an internal certificate authority entails initial hardware and software acquisition, and often extensive investments in training and personnel.
Beyond the initial implementation, expect to devote ongoing resources to maintaining the on-premises PKI framework: keeping up with audits, tracking evolving industry standards, updating hardware and software, as well as ensuring device integrity throughout the lifecycle. The total cost of ownership can be significant. This is why most companies that have the option choose hosted PKI offerings with more manageable, predictable economics.
Consideration #3: Crypto-agility
If your PKI is going to actually protect your IoT footprint — and your stakeholders’ or customers’ data — it needs to use up-to-date cryptography. That doesn’t happen automatically. Whoever owns the PKI framework needs to monitor and participate in standards groups to stay ahead of changing threats and implement continually evolving protocols. If you’re operating your own on-premises certificate authority, make sure you build that ongoing effort into your PKI budget.
Here again, companies across the board increasingly opt for cloud-hosted PKI. When standards shift or cryptographic properties change, a hosted PKI provider — whose core business entails investing in PKI staff and architecture — is ready for it. The leading public certificate authorities typically anticipate changes to curves, algorithms and hashes well before they are widely known or implemented. Getting ahead of quantum computing threats to today’s encryption algorithms looks to be the next frontier.
Consideration #4: Certificate Management
Managing the full lifecycle of certificates across a large volume of devices — even millions or billions of them — is not an easy task to run in-house. This requires a technology stack and strong policies and procedures to issue, install, renew and revoke certificates. Many vendors look to a trusted third party with automated offerings to discover and manage certificates, and especially one with a track record of having already provided certificate-based authentication for billions of connected devices.
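One core piece of lifecycle automation is flagging certificates before they lapse. A minimal sketch, with an invented device inventory and an assumed 60-day renewal window:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: device id -> certificate expiry time.
fleet = {
    "sensor-001": datetime(2026, 1, 15, tzinfo=timezone.utc),
    "sensor-002": datetime(2026, 9, 1, tzinfo=timezone.utc),
    "sensor-003": datetime(2026, 2, 1, tzinfo=timezone.utc),
}

def due_for_renewal(inventory, now, window_days=60):
    """Flag certificates expiring within the renewal window so they can
    be rotated before they lapse (a common automation pattern)."""
    cutoff = now + timedelta(days=window_days)
    return sorted(dev for dev, expiry in inventory.items() if expiry <= cutoff)

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
print(due_for_renewal(fleet, now))  # ['sensor-001', 'sensor-003']
```

At the scale of millions of devices, this same check runs continuously against the certificate authority's inventory, which is one reason automated management platforms matter.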
Don’t put off IoT security
The days when you could launch an IoT initiative without a sound strategy to authenticate devices and ensure data and system integrity are over. IoT is very much on the radar of cybercriminals. And the costs of reactive, after-the-fact security can easily climb to tens of millions of dollars. On the flip side, those companies that do IoT security right can reap major benefits. Bain & Company found that enterprises would buy more IoT devices — and pay up to 22% more for them, on average — if they were more confident that they were secure.
Before launching any new IoT application, make sure you’re building standards-based PKI security and authentication into the basic design of your architecture. Whether you manage certificates yourself or work with a hosted certificate authority, you’ll sleep better knowing your IoT footprint can’t be easily compromised. And your business will be able to capitalize on the full power and potential of IoT.
For the last decade, we have seen an interest expand to an obsession: grab the data, store the data, keep the data. The software industry saw an opportunity to capitalize on this obsession, leading to an explosion of big data open source technologies, like Hadoop, as well as proprietary storage platforms advertising their value as “data lakes,” “enterprise data hubs” and more. In a growing number of industries, the goal has been achieved: Ensure you have as much data as possible and keep it for as long as possible.
Data is the new oil, but mining for value requires lots of pipes
Now comes the next phase of any hype cycle: reality. Data is indeed the new oil or the new gas, but none of this matters if value cannot be mined from the data. The oil and gas industry has an advantage. In each identified location, an oil well is created by drilling a long hole into the earth and a single steel pipe (casing) is placed in the hole, allowing the oil to be extracted. When the oil is extracted, it is processed and then brought to market. No integration with other oil repositories is necessary. Unfortunately, this is not the case when drilling for business value in individual data lakes and data hubs.
Manufacturing, and specifically the manufacturing plant, offers one of the clearest examples of data that is collected but limited in value. Each plant collects its own data and, in some cases, stores that data in public or private clouds. The plant can (sometimes) use that data to optimize its own environment, understand what is happening and maybe even predict what is going to happen. But what about the trends, the insights and the continuous improvement practices that could benefit the many widely distributed manufacturing plants of a large enterprise? What about the optimization between manufacturing, inventory management, supply chain and distribution? All of these groups have their own data, but a single pipe can’t reach it.
Beware of centralizing the data
So, what is the solution? Some would say that it’s critical to centralize the data, to ensure that it is co-located in a single public cloud object store or a centralized data warehouse. But the 1980s are well behind us. Nevertheless, this approach is gaining some attention in the market from some of the cloud vendors and cloud-specific systems. They have a great motivation to reach for the data because of a newer and very dangerous term: data egress. Getting data into a central location is not easy, but it is doable. Getting data out of a single cloud or solution provider is very, very difficult and expensive because once the data is within a single environment, the vendor has control. The reality of distributed data is what we have to address, and this requires a completely different approach. The new reality is bringing the analytics to the data where it resides and in what format it needs to be but ensuring that this does not result in a tangled mess of pipes.
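“Bringing the analytics to the data” often boils down to pushing small, composable aggregations to each location and moving only the results, never the raw rows. A toy Python sketch with invented plant data:

```python
# Each "site" keeps its own data locally; only small aggregates travel.
plants = {
    "frankfurt": [98.2, 97.5, 99.0],
    "chicago":   [95.1, 96.4],
    "zurich":    [99.3, 98.8, 98.1, 97.9],
}

def local_aggregate(rows):
    """Runs where the data lives; returns only (sum, count), not raw data."""
    return sum(rows), len(rows)

def global_mean(sites):
    """Combine the per-site aggregates into one enterprise-wide answer."""
    total, n = 0.0, 0
    for rows in sites.values():
        s, c = local_aggregate(rows)
        total, n = total + s, n + c
    return total / n

print(round(global_mean(plants), 2))  # 97.81
```

The design choice here is that the aggregation is decomposable: each site computes a partial result in its own format and location, and only those partials cross the network, avoiding both data egress costs and the tangle of point-to-point pipes.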
Deriving business value with analytics
Successful industry disruptors focus on the business value derived from the analytical insights from their data, not simply the data collection. They each start with an end goal in mind and achieve it with a unified analytics platform that respects the data format and the data location, and applies a consistent and advanced set of analytical functions without demanding unnecessary, expensive and time-consuming data movement. A unified analytics platform is also open to integration within a broader ecosystem of applications, ETL tools, open source innovation and, perhaps most importantly, security and encryption technologies. On top of it all, a unified analytics platform delivers the performance needed for the scale of data that is the new normal in today’s world.
The hype cycle of data lakes is over, and the reality and the risk of data swamps are real. Combined with the confusion and uncertainty regarding the future of Hadoop, the time is now to architect — or rearchitect. And it’s imperative to start with the right end goal in mind: how to mine the data in a unified, protected and location-independent way without creating delays that undermine the business outcome.
Over the last few years, two acronyms that offer a vision of the future have become ubiquitous across the technology and communications industries: IoT and 5G. IoT is a broad term describing a future in which much of the electronic communications will be between autonomous devices. 5G is the fifth generation of mobile wireless. Let’s look at how the 5G radio access network (RAN) will support IoT.
IoT envisions communications between billions of devices. Although previous generations of mobile technology have provided some capability for machine-type communications, like meter reading and asset monitoring, these capabilities have either been designed as “over the top” custom applications or they have been built into 4G standards as an afterthought — think Narrowband-IoT and LTE-M, for example. 5G is the first standard to support machine communications from the beginning; the standard supports massive machine-type communications and ensures that the RAN will meet these needs.
Beyond changes to the standard, however, serving broad-based IoT requirements leads to additional considerations when designing the 5G RAN. Users will have high expectations that there will be sufficient coverage to deliver service to IoT devices anywhere they are installed, whether inside buildings or in the outside environment.
5G networks are being designed around three core application models:
- Speed — enhanced mobile broadband
- IoT — massive device deployments
- Latency — ultra-low latency applications
How does the 5G RAN meet these challenges?
5G networks are being designed to be almost 10 times faster than 4G technology, so they support a far wider range of applications.
5G supports 10 times as many connections per square kilometer, which is important because there will be billions of IoT devices to connect. Support for more connections translates to less equipment in the network, smoother deployments and faster deployment times.
In addition, the 5G RAN will extend to both indoor and outdoor radio sites. We will need coverage in buildings and factories, so there will be a mix of indoor and outdoor network equipment. In-building wireless, including small cells and distributed antenna systems, will drive RAN signals into buildings, while outdoor applications will be supported with everything from macro cell towers to small cells.
Support of services that require low or ultra-low latency can be achieved by optimizing the location/distribution of the baseband processing elements in the radio access network. This is supported within 5G standards by moving time-sensitive elements of baseband processing closer to the network edge.
In 5G, the baseband elements are broken down into a centralized unit (CU) for the non-real-time functions and a distributed unit (DU) for the real-time functions. To achieve minimum latency in the network, the DU and/or the CU are moved close to the network edge, typically to the radio access node or to a hub location.
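A back-of-the-envelope calculation shows why DU placement matters for latency. Assuming light travels through fiber at roughly two-thirds of c (about 5 microseconds per kilometer, one way) and using invented processing delays and distances:

```python
# Propagation in fiber: ~5 microseconds per km, one way (rough rule of thumb).
US_PER_KM = 5.0

def one_way_latency_us(fiber_km, processing_us):
    """Very rough one-way latency: fiber propagation plus baseband processing."""
    return fiber_km * US_PER_KM + processing_us

# Illustrative (assumed) numbers: DU at a centralized site 100 km away,
# versus a DU hosted at a hub 5 km from the radio site.
central = one_way_latency_us(100, processing_us=200)
edge = one_way_latency_us(5, processing_us=200)
print(central, edge)  # 700.0 225.0
```

Even with identical processing budgets, moving the real-time functions from a 100 km central site to a 5 km hub cuts hundreds of microseconds per direction, which is significant against the millisecond-class budgets of ultra-low-latency applications.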
Another major change in 5G architecture is the amount of virtualization in the RAN. A lot of the traditional centralized components will become virtualized and run on server platforms, which is ideal for IoT because it provides a common data center architecture that houses both data center resources and a piece of the RAN.
Look back on how computer virtualization changed IT architecture a decade ago by pooling underutilized compute resources across multiple devices. We are seeing the same shift in the RAN network, where a baseband unit’s capabilities are being shared between multiple cell sites.
Virtualized RAN components will be less expensive because they run as software on standard servers rather than the proprietary, hardware-specific devices used in the past. There will be variations depending on what the IoT network is trying to do; for example, low-latency applications will require that some of the RAN components are located closer to the end devices. Remote surgery and autonomous driving are use cases.
So, in terms of raw capabilities in the standard, a denser network and virtualization, the 5G RAN will support IoT applications with higher speeds, lower latency and greater reach. 5G will be the first cellular standard that satisfies IoT’s huge demands for connectivity.
United Nations Sustainable Development Goals 8 and 9 are important in the context of Industry 4.0 and industrial IoT. SDG-8 calls for decent work and economic growth, while SDG-9 calls for innovation in industry and infrastructure. The purpose of the SDGs is to improve social conditions and advance humanity. AI plays a critical role in accomplishing this. For instance, look at the innovation happening in the Industry 4.0 space, where AI systems are proving effective at preventing human error and improving efficiency. The case studies from early AI systems clearly demonstrate that AI can not only improve efficiency metrics, like yield and throughput, but also reduce material waste and harmful emissions. In these scenarios, AI will create a net gain for us as a society, improving the human condition.
AI can transform humanity by giving time back to humans to focus on more productive tasks. There are new skills to be learned and it is clear that specific types of work will be displaced by new ones. For the sake of this article, let’s assume that we are able to empower our current factory workers with new skills that make them relevant and productive in the age of AI. If we do that, are we all set? Is that the only societal challenge we have for realizing the potential of AI completely?
In an ideal world, AI systems work seamlessly with humans to create factories of the future that are lean, efficient and environmentally friendly. But we are far from that ideal world for two reasons: the inadequate infrastructure in industrial settings for collecting and providing accurate data, and algorithmic bias.
There are different ways of architecting AI systems. The most common way is to model the behavior of the world through data and make decisions based on the realized model of the world. As you can see, this is problematic. What if the data is not accurate? What if we don’t have enough data? What if our data only partially captures the world we want to model?
The recent industrial IoT revolution brought a surge of data availability in factories, opening the door to applying AI to factory operations. The challenge, however, is that the data is not ideal in several ways. Data collection processes were never optimized for future AI applications; rather, they were built for simple responsive actions and decision-making. This shows up when the data is used to create machine learning models for smart automation or predictive maintenance tools. Problems with the data can include incorrect sample rates, compressed or lossy data, incorrect readings from faulty or mechanically degraded sensors, and so forth.
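Many of these data problems can be caught by simple screening before any model training begins. A minimal sketch of such checks, with invented thresholds and sample data:

```python
def quality_flags(timestamps, values, expected_dt, lo, hi):
    """Simple data-quality screening for one sensor channel:
    gaps (missing samples), range violations, and flatline (stuck sensor)."""
    flags = {"gaps": 0, "out_of_range": 0, "flatline": False}
    # Gaps: consecutive timestamps much farther apart than the sample rate.
    for t0, t1 in zip(timestamps, timestamps[1:]):
        if t1 - t0 > 1.5 * expected_dt:   # tolerate small jitter
            flags["gaps"] += 1
    # Range violations: physically implausible readings.
    flags["out_of_range"] = sum(1 for v in values if not lo <= v <= hi)
    # Flatline: a long run of identical readings suggests a stuck sensor.
    flags["flatline"] = len(set(values)) == 1 and len(values) > 10
    return flags

ts = [0, 1, 2, 3, 7, 8, 9]                          # a hole between t=3 and t=7
vals = [20.1, 20.3, 19.8, 150.0, 20.0, 20.2, 20.1]  # one implausible spike
print(quality_flags(ts, vals, expected_dt=1, lo=-40, hi=120))
# {'gaps': 1, 'out_of_range': 1, 'flatline': False}
```

Screening like this does not fix the data, but it makes the gaps visible before a model silently learns around them.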
Algorithmic bias in AI, simply put, is a phenomenon where an AI deployment has a systematic error causing it to draw improper conclusions. This systematic error can creep in either because the data used to model and train the AI system is faulty, or because the engineers who created the algorithms had an incomplete or biased understanding of the world.
There have been several articles published about human bias contributing to biased AI systems, with well-documented evidence of AI systems showing biases in terms of political preference, racial profiling and gender discrimination. In the context of Industry 4.0 applications, these human biases are just as big a problem as data bias.
Going back to the SDG goals discussed above, we should aspire to improve the human condition by providing people meaningful work. Take the example of Ernesto Miguel, who has worked at a cement factory as a plant operator for the last 30 years. Ernesto spends most of his time ensuring the equipment under his watch functions efficiently. Over the last three decades, he has formed an intimate bond with the machines in his factory and developed an extraordinary ability to predict what might be wrong with a machine just by hearing the sound it makes. He could do more, such as training other workers to develop the same intuition. He wants to share his expertise, but unfortunately he spends most of his time reacting to equipment problems and preventing failures. This is a problem ripe for AI.
We deployed one of our AI systems to model a crucial piece of plant equipment — a cooler — in a cement factory. The idea was to learn how adequately we could model equipment behavior by looking at two years’ worth of time series data. The data provided a great deal of insight into how the cooler was operating. Using the data, our engineers were able to identify correlations between different inputs to the equipment and its operating conditions.
If this worked flawlessly, we would accomplish two goals: use smart AI systems that could keep the equipment functioning in an optimum way and allow Ernesto to focus on more meaningful work, such as effectively training other factory workers.
Bias creeps in inadvertently when AI system designers confuse data with knowledge.
It was a big moment when the first AI system was deployed in the cement plant. We don’t yet live in a world where we can trust machines completely, and for good reason. So, there was a safety switch included for the plant operator to intervene if something went wrong. The first exercise was to run the software overnight, where the AI system monitored the cooler and was responsible for keeping it within safe bounds. To the delight of everyone, the system successfully ran overnight. But that joy was short-lived when the first weaknesses in the model started appearing.
The cooler temperature was increasing, and the model, which had established a correlation between temperature and fan speed, kept increasing the fan speed. In the meantime, the back grate pressure rose above the safe value. But the model had identified no correlation between the back grate pressure and the temperature, and saw no need to adjust the back grate pressure in its objective of bringing down the cooler temperature. The plant operator overrode the control and shut off the AI model.
An experienced plant operator would have immediately responded to the increasing back grate pressure, as it is detrimental to the cooler’s operation. How did the AI model miss this?
In his 30 years, Ernesto never had to wait for the grate pressure to build up before reacting. He just knew when the pressure would build up and proactively controlled the parameters to ensure that the grate pressure would never cross a safe bound. By merely looking at the data, there was no way for the AI engineers to determine this. The data alone without context would tell you that the grate pressure would never be a problem.
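The gap between the learned model and Ernesto's knowledge can be sketched as two toy controllers: one that knows only the temperature-to-fan-speed correlation, and one with the operator's pressure bound added as an explicit constraint. All variable names and thresholds here are invented for illustration:

```python
def naive_control(temp_c, fan_speed):
    """A policy learned purely from data: the only correlation in the data
    was temperature -> fan speed, so that is the only lever it knows."""
    if temp_c > 90:
        fan_speed += 10
    return fan_speed

MAX_GRATE_PRESSURE = 55.0  # illustrative safe bound: operator knowledge
                           # that never appeared in the historical data

def guarded_control(temp_c, fan_speed, grate_pressure):
    """The same policy, plus the constraint Ernesto applied proactively."""
    if grate_pressure >= MAX_GRATE_PRESSURE:
        return fan_speed - 5   # back off before pressure builds further
    return naive_control(temp_c, fan_speed)

print(naive_control(95, 100))          # 110: keeps pushing, ignores pressure
print(guarded_control(95, 100, 58.0))  # 95: respects the safety bound
```

The point of the sketch is not the arithmetic but where the safe bound comes from: it had to be elicited from the operator through contextual inquiry, because the data alone never showed the failure mode he was preventing.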
Bias hurts AI systems in many ways. The biggest of all is that it takes trust away from these systems. On top of watching his workers and equipment, Ernesto will have to watch the AI models. He has to teach the system to do things differently, which the system then has to learn. The next versions will improve. This will always be a problem when we model AI systems purely from incomplete or inaccurate data. In industrial IoT settings, this will always be the case because data will be inaccurate or incomplete.
As technology builders, what does this mean for us? How do we realize the full potential of industrial AI systems? The answer lies in us starting to design these systems with empathy and taking a thoughtful approach:
- We cannot assume that data is a complete representation of the environment we are aspiring to model.
- We need to spend time doing contextual inquiry — a semi-structured interview guided by questions, observations and follow-up questions while people work in their own environments — to understand the lives of the workers whom we are trying to empower with AI systems.
- We need to assess all the possible scenarios that could occur in the problem we are trying to solve.
- We need to always start with a semi-autonomous system and only transition to a fully autonomous system when we are confident of its performance in production environments.
- We should continually adapt and train models to learn about the environment we are operating in.
Bringing AI into factory settings is more than just technology. It is about people. It is also about doing something with empathy and understanding the people whose lives the technology is going to touch.
It’s one thing to recognize that workloads are becoming more distributed and ushering in new opportunities. It’s entirely different to understand how distributed workloads can effectively be monetized at the edge.
Examples of edge monetization include a major medical equipment provider that has deployed thousands of edge locations within hospitals. Medical information along with machine sensor data is anonymized and transmitted to the cloud, where information across all hospital deployments is aggregated and analyzed to improve diagnostics, equipment performance and uptime. Their IoT deployment is part of a larger architecture that includes containers, machine learning and cloud processing.
Another example is a leading automobile manufacturer that is treating each autonomous driving test vehicle as a digital mobile edge that generates terabytes of data per vehicle per day. Each vehicle is part of a larger system that is constantly learning, gaining intelligence across all events and pushing the collective intelligence back out to the edge. This application focus is very similar to leading energy companies that have deployed real-time drilling applications that adjust to pursue the optimal drill path while monitoring to prevent breakage and downtime.
The key to these applications — and monetizing the edge in general — is to understand how to coordinate each edge location as part of a larger whole. This requires collecting data from each edge to see the global picture. Analytics cannot simply be descriptive, reporting on historical events at the edge. The analytics should also be predictive, gaining intelligence about past events to better anticipate future ones, such as equipment failures. Most significant of all, however, are prescriptive analytics that inject intelligence at the edge to respond in real time. These are the foundation of game-changing edge applications such as autonomous driving and real-time drilling.
Monetizing the edge depends on a persistent data fabric. A data fabric encompasses many different data types: files, tables, streams, videos and so forth. A fabric can also parse event streams that can act as digital threads for advanced AI. New models can replay these streams or threads, making it easy to compare new models to existing ones, speed burn-in time and increase accuracy. This is the important layer that can process and collect interesting event data to learn globally and act locally.
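Replaying a recorded stream against competing models can be as simple as re-running the same events through each one and comparing their behavior. A toy sketch, with invented events and threshold models:

```python
# Recorded digital thread: (timestamp, sensor reading) pairs captured at
# the edge and persisted in the data fabric.
events = [
    (0, 12.0), (1, 12.4), (2, 13.1), (3, 19.8), (4, 12.9),
]

def old_model(value):
    """The deployed model's alert rule (threshold is invented)."""
    return value > 20.0

def new_model(value):
    """A candidate model under evaluation (threshold is invented)."""
    return value > 15.0

def replay(stream, model):
    """Re-run a recorded stream through a model; count alerts raised."""
    return sum(1 for _, value in stream if model(value))

print(replay(events, old_model), replay(events, new_model))  # 0 1
```

Because both models see exactly the same recorded thread, any difference in their output is attributable to the model, not the data, which is what makes replay useful for burn-in and accuracy comparisons.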
So, as you look at IoT at the edge, keep in mind that it’s not just a one-way path of sensor data that is collected centrally. A larger system of data flows to and from and across edge devices is required. And to fully monetize, you will eventually need to embrace AI, cloud and container technology.