IoT Agenda

May 21, 2019  2:05 PM

The future of IoT requires a cybersecurity standard

Sameer Dixit Profile: Sameer Dixit
cybersecurity standards, Internet of Things, iot, IoT cybersecurity, IoT devices, IoT regulations, iot security, IoT standards, securing IoT, security standards

The IoT revolution is in full swing. Smart watches, smart speakers, connected refrigerators, AI thermostats and doorbells are just the tip of the IoT iceberg. Legacy infrastructure in industries such as transportation, manufacturing, utilities and logistics is also being upgraded with internet connectivity. As the entire world around us gets swallowed up by the IoT ecosystem, there's a critical need for cybersecurity standards to ensure the devices we rely on are secure and our privacy is protected.

Explosion of IoT

Gartner estimated that we will have 21 billion IoT devices by 2020. Some estimates suggest it could be more than double that at 50 billion IoT devices. Regardless of which estimate is more accurate — or if you just split the difference and assume the number is around 35 billion — it’s a staggering number of devices. That’s about five connected IoT devices for every single one of the 7 billion-plus people on planet Earth.

IoT devices will continue to grow exponentially in both volume and diversity, especially as 5G networks become mainstream. With wireless speeds 50 to 100 times faster than 4G LTE, 5G will become a primary network that competes with or replaces Wi-Fi for many businesses and consumers.

Every one of those devices expands the overall attack surface and provides an opportunity for hackers to exploit vulnerabilities, compromise network resources, or steal or expose sensitive information. Unfortunately, the vast majority of devices that are created will focus on performance and/or cost at the expense of security — or simply ignore the issue of cybersecurity altogether.

Challenges of IoT security

By definition, each IoT device is connected to a network in some way. It runs some sort of operating system — no matter how rudimentary — and most contain some sort of sensor and an ability to collect and transmit data. The fact that these devices are capable of executing code means they are also capable of being hacked and compromised.

It is crucial to encourage those designing and developing IoT devices to shift security left. Cybersecurity should be woven into the supply chain and development process rather than tacked on as a post-deployment afterthought.

The need for IoT cybersecurity standards

The proliferation of IoT devices — particularly low-cost IoT devices — lowers the bar for deploying IoT anywhere and everywhere, but organizations need to consider the security implications as well. The overwhelming volume of devices makes it virtually impossible for a company or consumer to effectively assess the security controls on their own and make an informed purchasing decision.

Businesses and consumers need to be able to easily identify IoT devices that meet minimum acceptable security standards. A standard for certifying the cybersecurity of IoT devices accomplishes both goals: it gives developers and manufacturers a benchmark to strive for and gives customers a simple way to determine which devices are secure.

Establishing IoT cybersecurity standards

Thankfully, there are IoT cybersecurity standards being developed, such as the CTIA Cybersecurity Certification Program for Cellular Connected Internet of Things Devices. CTIA represents the U.S. wireless communications industry and its members include a cross-section of wireless providers, equipment manufacturers, app developers and content creators.

The CTIA IoT cybersecurity standard strives to raise the bar on the minimum acceptable security design for IoT devices. CTIA is implementing the standard using a tiered approach, with a set of minimum criteria defined for each level. At a minimum, IoT devices must have password management, access controls, an ability to install software updates and a patch management process to achieve Level 1 certification. For Level 2 certification, devices must also include features such as multifactor authentication, remote deactivation and the ability to uniquely identify themselves. Level 3 — the highest level defined for the IoT cybersecurity standard — adds encryption of data at rest and evidence of tampering, among other things.
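As a rough illustration of how a tiered scheme like this works, the sketch below maps a device's feature set to the highest level whose cumulative requirements it meets. The feature labels are informal names drawn from the criteria above, and the assumption that each level includes all requirements of the levels below it is an interpretation, not the official CTIA test plan.

```python
# Illustrative sketch (not the official CTIA test plan): find the highest
# certification level a device's feature set could satisfy, assuming each
# level requires every feature of the levels below it plus its own.

LEVEL_REQUIREMENTS = {
    1: {"password_management", "access_controls",
        "software_updates", "patch_management"},
    2: {"multifactor_auth", "remote_deactivation", "unique_device_identity"},
    3: {"encryption_at_rest", "tamper_evidence"},
}

def certification_level(features: set) -> int:
    """Return the highest level whose cumulative requirements are met (0 if none)."""
    achieved = 0
    required = set()
    for level in sorted(LEVEL_REQUIREMENTS):
        required |= LEVEL_REQUIREMENTS[level]  # requirements accumulate per tier
        if required <= features:
            achieved = level
        else:
            break
    return achieved
```

A device with only the Level 1 basics lands at Level 1; adding Level 2 features without, say, multifactor authentication still leaves it at Level 1, which is the intent of a cumulative tier scheme.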

Giving the IoT cybersecurity standard some teeth

Creating an IoT cybersecurity standard is a great start, but standards only have value if they are adopted and enforced. With broad enough consensus, a standard becomes self-perpetuating. As businesses and consumers accept and expect devices to pass a given standard, companies must adhere to the standard or their products will not be purchased.

It takes time to achieve momentum like that, though. In the meantime, there must be some means of enforcing the standard on a smaller scale, or some consequences for organizations that ignore it. The wireless industry is an excellent starting point because many IoT devices are designed to connect to wireless carrier networks. As 5G networks roll out and become mainstream, carrier connectivity will essentially be a requirement.

In order for a device to attach to a carrier network, it must pass PTCRB certification — a framework established in 1997 by leading wireless operators to ensure compliance with global industry standards for wireless cellular devices. The CTIA Cybersecurity Certification Program for Cellular Connected Internet of Things Devices is offered as an additional, voluntary certification. A few carriers already require CTIA certification for devices connecting to their networks, which will hopefully drive broader adoption and enforcement throughout the industry.

The future of IoT cybersecurity

Developing and enforcing an IoT cybersecurity standard will not magically make everything secure overnight. Millions of devices already on the market and connected to networks around the world never had to meet any IoT cybersecurity standard, and there are no plans to simply disable them all or kick them off the internet.

Better security will come through continued efforts. Influencing developers and device manufacturers to move security left in the process and implement secure design and cybersecurity best practices by default is a step in the right direction. Ultimately, though, security is a moving target and ensuring better security for IoT devices will be an ongoing mission that will require focus and cooperation.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 21, 2019  12:00 PM

Four more tips for a successful IoT product development partnership

Mitch Maiman Profile: Mitch Maiman
Internet of Things, iot, IoT devices, IoT partners, IoT partnerships, iot security, IoT strategy, IoT wireless, product development, Service providers

In a 2017 article, I offered seven tips to ensure a successful IoT product development partnership. Nearly two years have passed, and while the old tips are still relevant, here are four more insights on the topic gained from our years of experience designing IoT products.

Security concerns are crucial

IoT products are often deployed at the network edge. Because these are often purpose-driven devices, some product companies pay little heed to security-related concerns. In the hands of end customers, these devices can become points of entry for hackers and other unauthorized users. There are many strategies to address these security concerns — or to at least set up reasonable barriers to entry. For companies that are working with a product development service provider, it is crucial that they include security in their statement of work. Also ensure that the service provider is fully conversant in the issues and has the expertise to implement remedial actions in order to mitigate security-related risks. If uncertain of the security strategies to apply, companies should engage with their development partner in the first phase of the project to help define an effective, tactical implementation plan.

Wireless standards are in a state of rapid flux

Not everyone is aware of all the wireless communications technologies and strategies available now and on the horizon. Likewise, there are important tradeoffs to be made in selecting the right strategy for an IoT product. The most common choices in recent years have been Wi-Fi, Bluetooth and cellular modems. However, there are now many spinoffs muddying the waters; among the more recent or evolving technologies are Zigbee, Bluetooth Low Energy, LoRa, Cat-M and Wi-Fi 6. An IoT product firm needs a supplier that is up to date on current and emerging technologies and has deep knowledge of the tradeoffs among risk, data and connectivity costs, power consumption and bandwidth. Selecting the wrong platform can result in excessive costs, rapid battery depletion, inadequate bandwidth or obsolescence issues.

Identify a clear end-customer value proposition

There are so many opportunities to convert passive devices into smart, connected IoT products. Before engaging a product development partner, make sure there is a solid business case. The IoT product company needs to ensure that the product serves a need or satisfies a desire for the end customer at a price they are willing to pay. This is the essence of value. Sometimes IoT product companies fall so in love with the product concept that they develop an unrealistic sense of its market value. As with any new product, there has to be a genuine, unmet need for the consumer.

Be realistic about schedules

Too often, an IoT development client does not realize how long it will take to create a finished product. There are lots of developer kits out there, along with single-board computers. On occasion, a "quick and dirty" proof of concept can be created. It may work, or possibly even look convincing. However, there can be a big difference between an engineer-created demo built in quantities of one or two and a full product designed to be reliable, tested and ready for scalable production. It is also typical for IoT devices to go through one or more regulatory approval processes. Depending on the hardware and communications strategy, that could involve processes such as CE, UL and FCC certification.

If cellular is involved, carrier certification will be needed. Using a pre-certified cellular modem cuts down but does not eliminate the process. If the IoT product is highly size-constrained and requires design around a cellular chipset, the carrier certification process can take months. In working with a product development partner, creating a production-ready product or even a minimum viable product that can be deployed without concern for field failures or warranty issues takes time.


It is fundamental to ensure that the expectations of an IoT development client and its product development services partner are clear and explicit. When there are disconnects between the two parties, problems invariably arise at the end of a project: the partner believes it has satisfied the customer's expressed needs, while the client believes the deliverables do not align with its expectations. Putting these best practices to work will go a long way toward establishing clarity, client satisfaction and an IoT product that provides clear and worthwhile value to the end customer.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 21, 2019  10:44 AM

Let’s do the time warp: Look back at wireless predictions

Cees Links Profile: Cees Links
802.11, 802.11ax, BLE, Bluetooth, distributed Wi-Fi, Internet of Things, iot, IoT wireless, smart home, Wi-Fi

It’s astounding. Time is fleeting. Look back just 15 years or so and my, how things have changed. What were the hot tech gadgets and trends in the first few years of the 21st century? The Nokia 6610 was the best-selling cellphone in 2002; it featured text messaging — its memory could store up to 75 texts! — but didn’t have a camera or email or internet browsing. Desktop computers far outnumbered laptops. Netflix was still DVDs-by-mail only. There was no Facebook or Twitter. Remember Myspace?

This was also the time in history when Cees Links, now considered a Wi-Fi pioneer, made some bold predictions about the future in his 160-page e-book, The Spirit of Wi-Fi, along with an explanation of how the first wireless LANs were created and how an interesting, 1998 meeting with Steve Jobs changed the face of wireless communications.

Since we are now living in a new wireless world, let’s take a look at how Links’ vision compares to reality as we know it now — and get a sneak peek into what’s coming next.

Prediction #1: Smart everything

Links predicted that business people would have four gadgets to help them in their day: a computer notebook with Wi-Fi and Bluetooth, a palmtop with Bluetooth, a cell phone with GPRS and Bluetooth, and a wireless headset with Bluetooth.

The results: Pretty darn close. Said Links, “I was pretty close, if you replace the palmtop with a tablet. Now you have a laptop to do real work, a tablet for convenient reading and checking the internet on the road, and a smartphone when really nonstationary.” What amazes Links the most is the demise of phone communication — more specifically, how much phone communication has been largely replaced by text messaging, successor apps like WhatsApp and group chats.

“The other remarkable thing was that in those years, we spoke a lot about videophone conversations becoming commonplace — and our concern about the amount of bandwidth they would require,” Links said. Today, videophone is (almost) free with chat communications, but selfies and Instagram posts are way more popular. It seems that some two-way conversations have been replaced with a series of one-way lobs of status communications.

Prediction #2: Cell phones and palmtops would merge

Links predicted that the number of devices would be reduced by merging the functionalities of a cell phone and palmtop.

The results: Yes, but… The palmtop and the phone did indeed integrate, and the tablet emerged. Though, as Links points out, many people may not know that an early tablet, the Apple Newton, originally launched in the mid-1990s. But it never really caught on. "It was too early," Links said, "and the proper data-communication standards and infrastructure didn't exist yet. Also, the MCUs — the brains of the computer — weren't powerful enough." This required essentially another decade of development.

But Links said what he really missed was the need for a tablet: “Frankly, I was initially skeptical when tablets came out. Now I see the tablet slowly starting to take over from the laptop, in the same way that chatting is taking over from emailing. So, who knows? The days of the laptop may be numbered.”

Prediction #3: Smart watches

The results: Bingo. “Let’s not forget the watch,” Links said back in 2002, questioning if it could play a larger role in the world of technological devices beyond mere accessory or jewelry. Clearly, smart watches, like the Apple Watch, Samsung watches and Fitbit devices, have brought this reality to life.

“Honestly, I’m surprised I mentioned that a watch could be more than jewelry,” Links said today. “But indeed, the thought of making the watch more useful than for merely tracking date and time has always lingered, and it still does. I think the watch industry so far has successfully kept the electronic watch at bay for two reasons: First, a watch is still a piece of jewelry, and second, the battery life is still short.”

Fitness trackers are in a similar market position. They're encroaching on the watch industry, but Links expects that, much like smart watches, they won't succeed in destabilizing it. "I wear a Fitbit," Links said, "but one that is purely sensing, as a simple bracelet. I love my jewelry watch, but wearing two watches is a little pathetic. Plus, I would get totally annoyed if they weren't indicating exactly the same time!"

New predictions: The future of Wi-Fi

Now it’s time to look forward and hear what Links thinks is in store for the future. After all, Wi-Fi was in the early stages of adoption when he wrote his book. At the time, Links described it as “a rich standard that would be with us for the coming decades and provide a solid basis for newer capabilities.”

Given that Wi-Fi 6 (802.11ax) is expected this year, what do you think will come next, and what challenges will have to be solved in the future?

Links: Wi-Fi (IEEE 802.11) is indeed still with us and going strong — no end of life in sight! From 802.11b to a, g, n, ac, ad and now Wi-Fi 6 (802.11ax), high-performance wireless technologies have been evolving since the beginning of this century. Essentially, there have always been two major drivers: good coverage in your whole house or office, and faster speed. There have been other underlying drivers, like reducing power consumption and heat (to keep your smartphone from melting) and integrating functions while reducing size and price. The need for higher data rates, bandwidth and capacity will continue, without compromising coverage. This is in line with the fact that video continues to become more and more important. Why should it take hours to download the latest series before going on a trip?

I recently wrote a white paper, “Wi-Fi 6 (802.11ax): What’s It All About?,” that discusses why higher speed, capacity and bandwidth are the key ingredients to success today. Everybody is connected on the same channel at the same time, and we always want more speed. There will be no rest for service providers!

Let’s talk about the smart home. There are multiple use cases that could be created to make the home smarter, like smart lighting, home security or lifestyle monitoring. You mentioned in your book that household applications will grow quickly once the infrastructure is in place. Is this the case today?

Links: Interestingly, the idea of low-power Wi-Fi was floating around a lot, and what we see today is that both Zigbee — which is essentially low-power Wi-Fi — and Bluetooth Low Energy have established themselves, although frankly, it took longer than I expected. I think that's because the value proposition of Zigbee and BLE is harder to grasp: their value lies in their close connection to data management and processing, which requires a completely different way of thinking.

Normally, a company begins with the business case for a product, which drives the application space. But with the smart home, it’s the other way around — the application is driving the business case. This means it has taken longer to establish the value of the smart home, but it’s getting there, slowly but surely.

The challenge is still the infrastructure. Each application almost needs its own gateway connected to the router to get lights, sensors or smart meters onto the internet, making implementation unnecessarily expensive. One of the larger steps forward is routers and set-top boxes with Zigbee and Bluetooth Low Energy integrated, and that's what the industry is working on now.

I think the future of the smart home is distributed Wi-Fi, with a pod in every room serving as an access point. With Wi-Fi 6 and distributed Wi-Fi, consumers will have Wi-Fi everywhere in their home or office. Each pod can also carry other wireless communication technologies, like Zigbee or Bluetooth, and support voice control, making talking to the internet a common feature in every room. This new infrastructure will help enable multiple smart home use cases, all on the same infrastructure.

Finally, how about some new predictions with regard to the evolution of Wi-Fi. Where do you think we’ll stand in another 15 years?

Links: There's no shortage of demand for both higher data rates and longer battery life, so developments in these areas will continue. Nowadays, I have to charge my laptop and my phone every day, which is a nuisance I grudgingly accept. Data rates also continue to be a bottleneck, though the focus probably needs to extend toward system-level performance. My Wi-Fi is way faster than the cable internet link to my house, and imagining a truly fast connection, with instant reactions to mouse clicks and no waiting, makes it clear that the industry still needs to improve a lot on these basic needs.

But even more exciting is the interaction between wireless connectivity and artificial intelligence. Imagine being able to exchange data all the time — from sensor data to work data to exploring thoughts and ideas for leisure or finding opportunities for relaxation and enjoyment — connected to proper guidance from something that "knows you" and can help. Wouldn't that be a dream come true?

Breakthroughs in human life have always come from technology inventions beyond imagination — cranes to help us lift things, wheels to move us faster than we can walk, writing to help us remember more than we can keep in our head, printing to share ideas wider and faster than we could imagine. And today, connectivity allows us to live in a healthier, more comfortable and more eco-friendly way — and to make better decisions, faster.

A connected world is a better world. Here’s to the next 15 years of Wi-Fi — no doubt it’s a future of great possibility!

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 20, 2019  2:48 PM

Do we all have to be up in the cloud? Enter edge computing

Joseph Dureau Profile: Joseph Dureau
Consumer IoT, edge, Edge computing, Enterprise IoT, Internet of Things, iot, IoT devices, IoT edge, IoT edge computing, IoT latency, iot privacy, IoT reliability

Although it’s hardly a secret, a steep rise in the number of connected devices around us is set to change the way we live, work and interact with technology. By 2025, forecasts indicate that there will be as many as 75 billion smart devices globally, introducing us to a new era of hyper-connectivity. These devices will not only collect data, but also produce and process information directly on the products closest to their users on the edge. Increased functionality and computing available on the edge is already changing the way companies design and build products, from intelligent construction site video surveillance to oil rig maintenance. In the following article, I will unpack how taking data processing out of the cloud and to the edge can positively impact reliability, privacy and latency.

What exactly is the edge?

Edge computing refers to applications, services and processing performed outside of a central data center and closer to end users. The definition of “closer” falls along a spectrum and depends highly on networking technologies used, the application characteristics and the desired end-user experience.

While edge applications do not need to communicate with the cloud, they may still interact with servers and internet-based applications. Many of the most common edge devices feature physical sensors and simple actuators, such as temperature sensors, lights and speakers, and moving computing power closer to these devices in the physical world makes sense. Do you really need to rely on a cloud server when asking your lamp to dim the lights? With collection and processing power now available on the edge, companies can significantly reduce the volume of data that must be moved and stored in the cloud, saving time and money in the process.
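One concrete way edge devices cut cloud traffic is by aggregating locally and forwarding only a summary plus any anomalous readings. The sketch below is a minimal illustration of that idea; the temperature band, summary fields and function name are illustrative assumptions, not drawn from any particular product.

```python
# Minimal sketch of edge-side data reduction: instead of streaming every
# raw temperature reading to the cloud, the device forwards one periodic
# summary plus any readings outside a safe band. Thresholds are illustrative.

def reduce_on_edge(readings, low=10.0, high=35.0):
    """Return (summary, anomalies): one aggregate plus out-of-band samples."""
    anomalies = [r for r in readings if r < low or r > high]
    summary = {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }
    return summary, anomalies
```

With this approach, a batch of thousands of samples collapses into one small summary record and a handful of anomaly values, which is where the bandwidth and storage savings come from.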

The stakes are high

With edge computing set to change the way we live and work, it’s critical for businesses to understand what’s at stake for their business models, customer experiences and workforces. Edge computing impacts three dimensions: reliability, privacy and latency — each with profound implications for companies and consumers alike. Additionally, the convergence of edge computing and artificial intelligence is unlocking new opportunities for companies in 2020 and beyond.

A primary motivator driving edge computing’s adoption is the need for robust and reliable technology in hard-to-reach environments. Many industrial and maintenance businesses simply cannot rely on internet connectivity for mission-critical applications. Wearables must also be resilient enough to perform without 4G. For these use cases and many more, offline reliability makes all the difference.

Protecting privacy is both a potential asset and a risk for businesses in a world where data breaches occur regularly. Consumers have become wary that their smart speakers — or the people behind them — are always listening, and companies heavily reliant on cloud technology have rightfully been scrutinized for what they know about users and what they do with that information.

Edge computing helps alleviate some of these concerns by bringing processing and collection into the environment(s) where the data is produced. The leading voice assistants on the market today, for example, systematically centralize, store and learn from every interaction end users have with them. Their records include raw audio data and the outputs of all algorithms involved, attached to logs of all actions taken by the assistant. The latest research and innovations also suggest that interactions are set to become significantly smoother and more relevant based on additional information about end users’ tastes, contacts, habits and so forth.

This creates a paradox for voice companies and others that rely on the cloud. For AI-powered voice assistants to be relevant and useful, they must know more personal information about their users. Moving processing power to the edge is the only way to offer the same level of performance without compromising on privacy.

In the simplest terms, latency is the time between an action and a response. You may have experienced latency on a smartphone as a slight delay between touching an app's icon and the app opening. For many industrial use cases, however, there is more at risk than a poor user experience and making users wait. For manufacturing companies, mission-critical systems cannot afford the delay of sending information to off-site cloud databases. Cutting power to a machine a split second too late is the difference between avoiding and incurring physical damage.

When the computing is on the edge, latency just isn’t an issue. Customers and workers won’t have to wait while data is sent to and from a cloud server. Their maintenance reports, shipping lists or error logs are recorded and tracked in real time.
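A hypothetical sketch of that point: a safety cutoff evaluated on the device itself completes in microseconds, with no network round-trip on the critical path. The 95°C threshold and the 80 ms budget below are made-up numbers for illustration, not from the article.

```python
# Hypothetical sketch: a machine overheat cutoff decided on the edge device
# itself, so the decision never waits on a cloud round-trip. Threshold and
# timing budget are illustrative assumptions.

import time

OVERHEAT_C = 95.0  # illustrative trip threshold

def should_cut_power(temperature_c: float) -> bool:
    """Local decision: trip the cutoff the moment the threshold is crossed."""
    return temperature_c >= OVERHEAT_C

start = time.perf_counter()
decision = should_cut_power(97.3)
elapsed_ms = (time.perf_counter() - start) * 1000

# The local check completes in well under a millisecond; a typical cloud
# round-trip would add tens to hundreds of milliseconds to this path.
assert decision and elapsed_ms < 80
```

The design point is simply that the decision logic lives next to the sensor; the cloud can still receive the event log afterward, off the critical path.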

Local computing power becomes the norm

We are living in a centralized world, whether we think about it that way or not. Every time you turn on your mobile phone or open a SaaS application, you are essentially engaging with an interface that represents what is occurring on a cloud server. In his 2016 talk, "The End of Cloud Computing," Andreessen Horowitz's Peter Levine outlined a vision for the future of edge computing. "Your car is basically a data center on wheels. A drone is a data center on wings," Levine quipped. Nearly three years later, Levine's words couldn't be more prophetic. With more and more applications capable of functioning in local environments thanks to innovations in edge computing, decentralization is becoming far more than just a trendy buzzword, and companies and consumers are benefiting from improved reliability, privacy and latency in their IoT devices.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 20, 2019  10:49 AM

How to think about LPWAN for commercial real estate applications

Sharad Shankar Profile: Sharad Shankar
building management system, cellular IoT, Enterprise IoT, Internet of Things, iot, IoT application, IOT Network, IoT use cases, IoT wireless, LoraWAN, low-power wide area network, LPWAN, LTE-M, Narrowband IoT, NB-IoT, real estate, Sigfox, Smart Building

The data throughput from IoT devices will grow exponentially each year. Users rely on this data to inform decision-making, and more businesses are allowing their processes to be driven by IoT data. One salient example, which will be the focus of this article, is commercial real estate buildings; however, the information provided can be used for other applications as well.

Commercial real estate buildings have hundreds of pieces of equipment that require maintenance and other upkeep to make sure the asset functions as intended. With rooftop units in the penthouse mechanical room, domestic water pumps in the basement and air handler units in tenant spaces, installing sensors on each piece of equipment can be challenging. Traditional systems, such as building management systems, often wire sensors throughout the property, but the price tag for these systems can reach seven figures.

If you don’t want to wire every single sensor due to the cost and complexity, there are several questions to answer when evaluating IoT:

  1. What type of data do I need to capture for the pieces of equipment within my building?
  2. How critical are these systems? Do I need to control any of these pieces of equipment remotely?
  3. What network infrastructure is available in the building?
  4. How old is the building? What is the approximate height between floors? What is the thickness between concrete slabs?
  5. What is my overall implementation cost? Will there be any added subscription costs moving forward?

Based on the answers to the questions above, the infrastructure required for an IoT system can vary significantly. The most common building protocols between devices are BACnet MSTP/IP, Modbus RTU/TCP, digital outputs (dry or wet contacts), and analog outputs (4-20 mA, 0-10 V).

All of these protocols traditionally require a physical connection between the data logging equipment and each sensor. These data logging devices convert the raw output into a readable format appropriate to each sensor.
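For the analog outputs mentioned above, the conversion a data logger performs is a simple linear scaling. This sketch assumes the standard 4-20 mA convention (4 mA at the range minimum, 20 mA at the maximum); the 0-100 psi default range is an arbitrary example, not from the article.

```python
# Linear scaling for a 4-20 mA current loop: 4 mA maps to the sensor's
# range minimum and 20 mA to its maximum. Readings outside the loop range
# usually indicate a wiring fault or a failed sensor.

def scale_4_20ma(current_ma: float, lo: float = 0.0, hi: float = 100.0) -> float:
    """Convert a 4-20 mA loop current to engineering units (e.g., psi)."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("current outside 4-20 mA loop range (possible fault)")
    return lo + (current_ma - 4.0) / 16.0 * (hi - lo)
```

A useful side effect of the 4 mA "live zero" is fault detection: a reading of 0 mA means a broken loop rather than a legitimate minimum-range measurement, which is why the sketch raises an error instead of returning a value.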

It is not always possible to achieve the desired economics with wired sensors across an entire building. Modern IoT systems can read these common protocols and transmit the readings over a wireless communication medium. These wireless devices use a form of low-power wide area network (LPWAN). Generally battery operated, they can send data on ad hoc, intermittent and/or regular schedules. These technologies are already in use in Europe and increasingly in North America and the rest of the world.

There are several LPWAN technologies, each with its pros and cons. Some of these include LTE-M (cellular), Narrowband-IoT (cellular), LoRaWAN and Sigfox. There are other options available, but these are the most common and currently available for deployment. Of the four, LoRaWAN is the most common worldwide. LTE-M and Narrowband-IoT (NB-IoT) are close seconds, but unlike LoRaWAN, the deployment is carrier-specific and not region-specific (more on this below).

You may be asking: For a wireless deployment, why not use Wi-Fi or Bluetooth? While these are viable options for some instances, they are not the preferred strategy for small amounts of data being transmitted across long distances and through walls.

The key difference here is that LPWAN technologies communicate at sub-GHz frequencies. LPWAN can communicate through an entire building for the same reason that walkie-talkies and cordless phones in the 1990s had such a wide range: they also operated in a sub-GHz band, in the United States at about 900 MHz.

As the frequency decreases, the communication range between devices increases, because the wireless signal has a better chance of passing through floors and walls. That said, the amount of data a device can send decreases the farther it is from the receiver: range and throughput are inversely related.
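The range advantage of lower frequencies can be made concrete with the free-space path-loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 20·log10(4π/c). This sketch compares a 915 MHz LPWAN link with a 2.4 GHz Wi-Fi link at the same distance; it ignores wall penetration losses, which further favor sub-GHz:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for an isotropic link."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Same 500 m link budget at LPWAN vs Wi-Fi frequencies:
loss_915 = fspl_db(500, 915e6)
loss_2400 = fspl_db(500, 2.4e9)
print(round(loss_2400 - loss_915, 1))  # ~8.4 dB more loss at 2.4 GHz
```

An extra 8 dB of loss at 2.4 GHz, before any walls, is roughly a sixfold reduction in received power, which is why sub-GHz radios win on range.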

The advantage with LPWAN is that it minimizes the total time required to deploy an IoT sensor compared to a traditional physical connection between sensors. The cost of these technologies is also decreasing and will continue to do so over the next five to 10 years. The chart below summarizes all four technologies with their capabilities:

As seen above, each wireless technology has its pros and cons. However, as of today, the most promising long-range wireless technology is LoRaWAN, followed by LTE-M and NB-IoT. In the long run, LTE-M and NB-IoT are capable of taking a higher market share, but their growth is constrained by the fact that no single provider offers global coverage.

For example, in the United States, AT&T and Verizon have adopted LTE-M, Sprint is currently deploying an LTE-M network, and T-Mobile has decided to ditch LTE-M and put resources into NB-IoT right away. Every carrier around the world is deploying at its own pace subject to regulations. There is no action plan for global coverage yet.

Unlike LTE-M and NB-IoT, LoRaWAN and Sigfox are carrier-free, which allows one to install wireless bridges in ad-hoc locations within buildings. Communication from these bridges can connect to existing building networks and/or cellular connectivity.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 17, 2019  11:21 AM

Mastering IoT compliance in a GDPR world

Reggie Best Profile: Reggie Best
Compliance, GDPR, Internet of Things, iot, IoT compliance, iot security, Network monitoring, network visibility, Policy Management, securing IoT

Believe it or not, the one-year anniversary of the date the General Data Protection Regulation went into effect is almost upon us. The May 25, 2018 date will live in infamy for many organizations — particularly those that scrambled to get their people, processes and technologies in order ahead of the GDPR deadline.

But now that we’re a year in, and the initial chaos has died down, it’s the perfect time to reflect upon how GDPR has impacted organizations over the past 12 months — and not just from a data privacy perspective, but from an overall risk standpoint. With this in mind, I’d like to discuss how the regulation has prompted organizations to take IoT security more seriously.

IoT visibility is central to GDPR compliance

IoT has exploded the attack surface, making complete visibility into all connected endpoints across all computing environments a major challenge for many IT security teams. In my first IoT Agenda post, I talked about how, for many of our enterprise customers, it’s not uncommon that we profile a third or more of IP-enabled endpoints as IoT-type devices, yet many IT security teams don’t even know these devices are on their networks.

If you don’t know an IoT device is on your network, how can you protect it? You can’t. Not only does this introduce significant enterprise risk, but, in the case of GDPR, failing to implement proper security controls and data privacy measures on IoT devices leaves you vulnerable to compliance fines and associated consequences, such as a damaged reputation and loss of customers.

Rather than risk noncompliance, many organizations are getting serious about IoT security. And it all starts with visibility.

Network infrastructure monitoring technology helps organizations unearth unknown networks and attached endpoints to gain complete network visibility into all assets residing across all computing environments, as well as data at rest, data in transit and data in process. Armed with this information, internal teams can answer important questions such as: What IoT devices are on corporate networks? What data is being held, where and why? Who’s accessing that data and is access appropriate? Most importantly, with an accurate understanding of the state of their network infrastructure, IT security teams can protect all IoT devices and the data they analyze and transmit as specified by GDPR.

Beyond understanding what IoT devices are on corporate networks, IT security teams must also know where they are, what they’re doing and who they’re communicating with to ensure proper security and compliance measures and to protect personally identifiable information. Helpful practices to consider include monitoring IP addresses on the network and how they’re moving, identifying potential leak paths and unauthorized communications to and from the internet, and detecting anomalous traffic and behavior.
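As a toy illustration of that first visibility step, the sketch below classifies profiled endpoints and surfaces unknown devices for investigation. All device names, profiles and addresses are invented; real monitoring tools build this inventory from network traffic rather than a hand-written list:

```python
# Hypothetical endpoint inventory, as a profiling tool might assemble it
endpoints = [
    {"ip": "10.0.1.20", "profile": "workstation"},
    {"ip": "10.0.2.41", "profile": "ip-camera"},
    {"ip": "10.0.2.42", "profile": "hvac-controller"},
    {"ip": "10.0.3.77", "profile": None},  # unprofiled: unknown device
]

IOT_PROFILES = {"ip-camera", "hvac-controller", "smart-tv"}

iot = [e for e in endpoints if e["profile"] in IOT_PROFILES]
unknown = [e for e in endpoints if e["profile"] is None]

print(f"IoT share of endpoints: {len(iot) / len(endpoints):.0%}")
print("needs investigation:", [e["ip"] for e in unknown])
```

Even this trivial version makes the point: you cannot apply GDPR controls to the `10.0.3.77` device until you know what it is.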

The role of automated policy management

If knowing is half the battle, the other half is action. Once companies have a comprehensive understanding of the endpoints and data residing on their networks, they can develop “zones of control,” bringing each under the right network policies and access rules.

Automated policy orchestration tools help companies achieve continuous security and compliance with regulations, such as GDPR, because they enforce appropriate access policies, rules and configurations on all assets, regardless of how they change or move. Additionally, in the event of noncompliance, policy orchestration technology makes it easier for IT security teams to identify where the violation occurred — a capability that comes in especially handy if an organization needs to meet GDPR’s 72-hour breach notification deadline.
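A zone-of-control check can be sketched as a simple rule lookup. The zone names and rules below are invented for illustration; real policy orchestration tools enforce such rules in firewall and network device configurations rather than application code:

```python
# Each zone lists the zones it may initiate connections to
ZONE_POLICY = {
    "iot-cameras": {"video-storage"},
    "corp-workstations": {"video-storage", "internet"},
}

def check_flow(src_zone: str, dst_zone: str) -> bool:
    """Return True if the flow complies with the source zone's access rules."""
    return dst_zone in ZONE_POLICY.get(src_zone, set())

observed_flows = [("iot-cameras", "video-storage"), ("iot-cameras", "internet")]
violations = [f for f in observed_flows if not check_flow(*f)]
print(violations)  # a camera reaching the internet is flagged as a violation
```

Flagging the violating flow, with its source pinpointed, is exactly the capability that helps meet GDPR's 72-hour breach notification deadline.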

Prioritizing IoT security

To achieve IoT compliance in a GDPR world, organizations must have real-time visibility across all of their networks, devices, endpoints and data. They must be able to immediately detect any suspicious network behavior or compliance gaps. And they must automate response so they can quickly remediate security and compliance violations. Network infrastructure monitoring technology and automated policy management, along with the above tips, are a good start to achieving not only GDPR compliance, but a stronger security posture. And now that most companies have mastered the basic building blocks of GDPR, hopefully we’ll continue to see IoT security and compliance become a greater priority over the coming year.


May 16, 2019  2:54 PM

Scaling the industrial internet of things

Michael Schallehn Profile: Michael Schallehn
Enterprise IoT, IIoT, industrial internet of things, Industrial IoT, Internet of Things, iot, IoT integration, IoT partners, smart manufacturing

Back in 2016, predictive maintenance was forecast to be one of the most promising uses of industrial IoT. It seemed like a no-brainer: Who wouldn’t want better information to prevent equipment failure?

So it’s somewhat surprising that predictive maintenance has failed to take off as broadly as expected. A recent Bain survey of more than 600 executives found that industrial customers were less enthused about the potential of predictive maintenance in 2018 than they were two years earlier. In our conversations with buyers, we heard that implementing predictive maintenance systems has been more difficult than anticipated, and it has proven more challenging to extract valuable insights from the data.

Predictive maintenance is just one of many IoT use cases that customers have had difficulty integrating into their existing operational technology and IT systems. As companies in the industrial sector have invested in more proofs of concept, many have found IoT implementation more challenging than they anticipated.

Because of this, we find that customer expectations, while still bullish for the long term, dampened slightly for the next few years (see Figure 1). Our 2018 survey found that buyers of industrial IoT services and equipment expect implementation to take longer than they thought it would back in 2016.

Figure 1: The IoT outlook for 2020 has dampened, but long-term targets remain bullish. Note: Red dashed lines indicate 2016 forecasts.

Bain’s 2018 survey also found that among industrial customers, concerns over integration issues — in particular, technical expertise, data portability and transition risks — have become more acute over the past two years (see Figure 2).

Figure 2: More experience with proofs of concept has shifted IoT customers’ concerns in the past two years.

  • In 2016, customers were most concerned about security, returns on investment and the difficulty of integrating IoT systems with existing IT and operational technology.
  • In 2018, security and integration were still top concerns, indicating that tech vendors haven’t made much progress in addressing them.
  • Fewer customers are concerned about ROI than in 2016, perhaps because they have been satisfied by the returns on their early implementations. Industrial IoT use cases are beginning to deliver on vendors’ promises.

Customers are increasingly worried about issues that arise during implementation: technical expertise, difficulties in porting data across different formats, and the transition risks. Proofs of concept have revealed these challenges, and companies now realize that although the effort pays off, the devil is in the details.

Despite these barriers, industrial IoT remains a promising opportunity. Bain’s research indicates that the industrial portion of IoT — including software, hardware and system solutions in the manufacturing, infrastructure, building and utilities sectors — continues to grow rapidly, and could double in size to more than $200 billion by 2021 (see Figure 3).

Figure 3: The industrial IoT market could reach $200 billion by 2021.

To capture that opportunity, device makers and other vendors of industrial and operational technology need to dramatically improve their software capabilities — not a historical strength for most of them. Leaders are investing heavily in acquisitions to obtain the necessary capabilities and talent. Most of this M&A activity targets companies further up the technology stack — in the realms of software and systems integration — than the core capabilities of industrial companies.

As vendors and manufacturers work to build scale, four groups of actions can help position them for long-term success:

  • Concentrate your bets. Focus on select use cases and tackle the key barriers to adoption: security, ROI and integration with IT and operational technology. Learn from proofs of concept and develop repeatable playbooks. Package IoT solutions into scalable products that you then can roll out to customers.
  • Find good partners. Acknowledge your capability gaps and find partners to address them. Work closely with cloud service providers, analytics vendors or enterprise IT vendors. At the same time, avoid broad and unwieldy alliances with too many players; partnerships tend to be more effective with a selective approach based on the use case.
  • Understand it may take a while to break even. Building capabilities and forging strong partnerships takes time, so commit to long investment periods. Approach the effort with a realistic view of the funding, timeline and staffing changes needed to deliver results.
  • Identify new talent. Your best employees excel at their jobs, but new operating models may require different skills. Learn to identify, hire and retain the entrepreneurial talent to thrive in your evolving business model.

Finally, companies will need to be clear on where IoT fits into their operating model. Some executives worry about new products cannibalizing existing products and their revenue. Companies need to allow internal entrepreneurs to build new lines of business without alienating the rest of the organization.


This article was cowritten with Peter Bowen, Christopher Schorling and Oliver Straehle, partners with Bain’s Global Technology practice in Chicago, Frankfurt and Zurich, respectively.

May 16, 2019  12:26 PM

It’s time to get serious about securing the internet of things

Dan Timpson Profile: Dan Timpson
Certificate authority, digital certificate, Internet of Things, iot, IoT authentication, iot security, IoT threats, PKI, Public-key infrastructure, securing IoT

Not long ago, many IT leaders viewed IoT as little more than an interesting science project. Today, companies in every industry rely on IoT insights as part of their core business strategies. According to DigiCert’s recent “State of IoT Security Survey 2018” (registration required), 92% of companies expect IoT to be important to their business by 2020. In all, analysts project the global IoT market to more than double by 2021, reaching about $520 billion.

That’s a whole lot of new devices popping up on the world’s networks. And there’s one group that can’t wait to get their hands on them: cybercriminals. With nearly 10 billion IoT devices forecasted to come online by 2020, attackers see billions of new potential attack vectors. The fact that many connected devices still ship with inadequate security makes them even more attractive targets.

“Businesses are bringing insecure devices into their networks and then failing to update the software,” said Vik Patel in a recent conversation with Forbes. “Failing to apply security patches is not a new phenomenon, but insecure IoT devices with a connection to the open Internet are a disaster waiting to happen.”

For some companies, the disaster is already here. According to the DigiCert survey, among organizations struggling to master IoT security, 100% experienced a security mishap (IoT-based denial-of-service attacks, unauthorized device access, data breaches, IoT-based malware) in the past two years. Those issues can carry a big price tag. A quarter of struggling organizations reported $34 million or more in incurred costs from IoT security mishaps.

Fortunately, the IoT security problem is far from intractable; there are mature, proven strategies that organizations can employ to secure connected devices. But the key is to take those steps before a vulnerability or breach is identified instead of trying to retrofit devices after the fact. The most successful organizations employ a security-by-design approach using public key infrastructure (PKI) and digital certificates. Using PKI to reinforce the security basics — authentication, encryption, data and system integrity — you can keep your IoT footprint ahead of the threat.

Why PKI?

Based on the same PKI standard that millions of websites rely on every day for secure connectivity, PKI provides an ideal framework for mutual trust and authentication in IoT. In addition to encrypting sensitive traffic, PKI verifies that IoT devices — and any users, devices or systems communicating with them — are who they claim to be. When all parties to IoT communications have a trusted digital certificate vouching for their legitimacy, it becomes much harder for malicious actors to, for example, hijack a device or inject malware into its firmware.

PKI is a perfect match for the exploding IoT sector, as it can provide trust and control at massive scales in a way that traditional authentication methods, like tokens and passwords, can’t. PKI provides:

  • Strong data protection: PKI can encrypt all data transmitted to and from IoT devices, so that even if a device is compromised, attackers can’t do anything with the data.
  • Minimal user interaction: With digital certificates, PKI authenticates users and devices behind the scenes, automatically — without the interruptions or user interaction required by passwords and token policies. Certificates also provide stronger identity by including information such as the device serial number.
  • Secure code: Using code signing certificates, companies can sign all code on the device firmware, assuring only trusted code can operate on the device. This protects against malware and supports secure over-the-air updates to the device.
  • Effortless scalability: Originally designed for huge networks and web services with vast numbers of users, PKI can easily scale to millions of IoT devices.
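The secure-code bullet above can be illustrated with a verify-before-run sketch. Real code signing uses an asymmetric key pair and an X.509 code-signing certificate, which the Python standard library does not provide; the HMAC below is a loudly labeled stand-in for that signature, used only to show the flow:

```python
import hashlib
import hmac

# Stand-in for the vendor's signing key. In real code signing this is an
# asymmetric private key whose public half is carried in a certificate.
SIGNING_KEY = b"vendor-demo-key"

def sign_firmware(image: bytes) -> bytes:
    """Produce an integrity tag over the firmware image (HMAC as stand-in)."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def verify_before_boot(image: bytes, tag: bytes) -> bool:
    """Device-side check: refuse to run firmware whose tag does not verify."""
    return hmac.compare_digest(sign_firmware(image), tag)

firmware = b"\x7fELF...demo-firmware-image"
tag = sign_firmware(firmware)

assert verify_before_boot(firmware, tag)                # untampered image boots
assert not verify_before_boot(firmware + b"\x00", tag)  # tampered image rejected
```

The asymmetric version has the crucial extra property that devices hold only the public key, so compromising a device never yields the signing key.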

Companies typically choose from two options for deploying PKI: implementing and operating their own private PKI framework on-premises, or using hosted PKI services from a public certificate authority. Which approach is right for your organization? Let’s evaluate the four C’s.

The four C’s of PKI

Consideration #1: Control
How much control do you need over your certificate infrastructure? It often depends on your industry. In heavily regulated industries with complex, rigorous compliance requirements, many companies keep everything in-house. This does provide fine-grained control and comprehensive auditing capabilities. But it also requires significant time, money and expertise. Someone in the company must “own” the process of ensuring the framework adheres to industry standards, enforcing policies to establish trusted roles, managing key ceremonies and data storage policies, ensuring reliable certificate renewals and revocations, and much more. The resources required to do in-house PKI right, and the potential to do it wrong and cause significant damage, are often more than a company wants to take on.

This is not a small effort, and it’s not for amateurs. This is why many companies in less-regulated industries, and even many in regulated ones, prefer a hosted solution, letting a public certificate authority handle all the complexity. If you need the control of an on-premises system but you don’t want the management headaches, some PKI providers offer hybrid models. These combine on-premises systems that can issue publicly trusted certificates through a secure gateway that communicates directly with a scalable cloud issuance platform.

Consideration #2: Cost
When you deploy and manage your own private PKI framework, you can build exactly the system you want. But don’t expect it to come cheap. Standing up an internal certificate authority entails initial hardware and software acquisition, and often extensive investments in training and personnel.

Beyond the initial implementation, expect to devote ongoing resources to maintaining the on-premises PKI framework: keeping up with audits, tracking evolving industry standards, updating hardware and software, as well as ensuring device integrity throughout the lifecycle. The total cost of ownership can be significant. This is why most companies that have the option choose hosted PKI offerings with more manageable, predictable economics.

Consideration #3: Crypto-agility
If your PKI is going to actually protect your IoT footprint — and your stakeholders’ or customers’ data — it needs to use up-to-date cryptography. That doesn’t happen automatically. Whoever owns the PKI framework needs to monitor and participate in standards groups to stay ahead of changing threats and implement continually evolving protocols. If you’re operating your own on-premises certificate authority, make sure you build that ongoing effort into your PKI budget.

Here again, companies across the board increasingly opt for cloud-hosted PKI. When standards shift or cryptographic properties change, a hosted PKI provider — whose core business entails investing in PKI staff and architecture — is ready for it. The leading public certificate authorities typically anticipate changes to curves, algorithms and hashes well before they are widely known or implemented. Getting ahead of quantum computing threats to today’s encryption algorithms looks to be the next frontier.

Consideration #4: Certificate Management
Managing the full lifecycle of certificates across a large volume of devices — even millions or billions of them — is not an easy task to run in-house. This requires a technology stack and strong policies and procedures to issue, install, renew and revoke certificates. Many vendors look to a trusted third party with automated offerings to discover and manage certificates, and especially one with a track record of having already provided certificate-based authentication for billions of connected devices.

Don’t put off IoT security

The days when you could launch an IoT initiative without a sound strategy to authenticate devices and ensure data and system integrity are over. IoT is very much on the radar of cybercriminals. And the costs of reactive, after-the-fact security can easily climb to tens of millions of dollars. On the flip side, those companies that do IoT security right can reap major benefits. Bain & Company found that enterprises would buy more IoT devices — and pay up to 22% more for them, on average — if they were more confident that they were secure.

Before launching any new IoT application, make sure you’re building standards-based PKI security and authentication into the basic design of your architecture. Whether you manage certificates yourself or work with a hosted certificate authority, you’ll sleep better knowing your IoT footprint can’t be easily compromised. And your business will be able to capitalize on the full power and potential of IoT.


May 16, 2019  11:50 AM

The realities of enterprise data lakes: The hype is over

Joy King Profile: Joy King
Big Data, Data Analytics, Data lake, Data Lakes, Edge analytics, enterprise data hub, IoT analytics, IoT data

For the last decade, we have seen an interest expand to an obsession: grab the data, store the data, keep the data. The software industry saw an opportunity to capitalize on this obsession, leading to an explosion of big data open source technologies, like Hadoop, as well as proprietary storage platforms advertising their value as “data lakes,” “enterprise data hubs” and more. In a growing number of industries, the goal has been achieved: Ensure you have as much data as possible and keep it for as long as possible.

Data is the new oil, but mining for value requires lots of pipes

Now comes the next phase of any hype cycle: reality. Data is indeed the new oil or the new gas, but none of this matters if value cannot be mined from the data. The oil and gas industry has an advantage. In each identified location, an oil well is created by drilling a long hole into the earth and a single steel pipe (casing) is placed in the hole, allowing the oil to be extracted. When the oil is extracted, it is processed and then brought to market. No integration with other oil repositories is necessary. Unfortunately, this is not the case when drilling for business value in individual data lakes and data hubs.

The manufacturing industry, and specifically manufacturing plants, offers one of the clearest examples of data that is collected but limited in value. Each plant collects its own data and, in some cases, stores that data in public or private clouds. The plant can (sometimes) use that data to optimize its own environment, understand what is happening and maybe even predict what is going to happen. But what about the trends, the insights and the continuous improvement practices that could benefit a large enterprise’s many widely distributed manufacturing plants? What about the optimization between manufacturing, inventory management, supply chain and distribution? All of these groups have their own data, but a single pipe can’t reach it all.

Beware of centralizing the data

So, what is the solution? Some would say that it’s critical to centralize the data, to ensure that it is co-located in a single public cloud object store or a centralized data warehouse. But the 1980s are well behind us. Nevertheless, this approach is gaining some attention in the market from some of the cloud vendors and cloud-specific systems. They have a great motivation to reach for the data because of a newer and very dangerous term: data egress. Getting data into a central location is not easy, but it is doable. Getting data out of a single cloud or solution provider is very difficult and expensive, because once the data is within a single environment, the vendor has control. Distributed data is the reality we have to address, and it requires a completely different approach: bringing the analytics to the data, wherever it resides and in whatever format it exists, without creating a tangled mess of pipes.

Deriving business value with analytics

Successful industry disruptors focus on the business value derived from analytical insights, not simply on data collection. They start with an end goal in mind and achieve it with a unified analytics platform that respects the data’s format and location, and applies a consistent, advanced set of analytical functions without demanding unnecessary, expensive and time-consuming data movement. A unified analytics platform is also open to integration within a broader ecosystem of applications, ETL tools, open source innovation and, perhaps most importantly, security and encryption technologies. On top of it all, it delivers the performance needed for the scale of data that is the new normal in today’s world.

The hype cycle of data lakes is over, and the reality and the risk of data swamps are real. Combined with the confusion and uncertainty regarding the future of Hadoop, the time is now to architect — or rearchitect. And it’s imperative to start with the right end goal in mind: how to mine the data in a unified, protected and location-independent way without creating delays that undermine the business outcome.


May 15, 2019  4:07 PM

How the 5G RAN supports the internet of things

Morné Erasmus Profile: Morné Erasmus
5G, 5G network, Internet of Things, iot, IOT Network, IoT networking, radio access network, RAN, Virtualization

Over the last few years, two acronyms that offer a vision of the future have become ubiquitous across the technology and communications industries: IoT and 5G. IoT is a broad term describing a future in which much of the electronic communications will be between autonomous devices. 5G is the fifth generation of mobile wireless. Let’s look at how the 5G radio access network (RAN) will support IoT.

IoT envisions communications between billions of devices. Although previous generations of mobile technology have provided some capability for machine-type communications, like meter reading and asset monitoring, these capabilities have either been designed as “over the top” custom applications or they have been built into 4G standards as an afterthought — think Narrowband-IoT and LTE-M, for example. 5G is the first standard to support machine communications from the beginning; the standard supports massive machine-type communications and ensures that the RAN will meet these needs.

Beyond changes to the standard, however, serving broad-based IoT requirements leads to additional considerations when designing the 5G RAN. Users will have high expectations that there will be sufficient coverage to deliver service to IoT devices anywhere they are installed, whether inside buildings or in the outside environment.

5G networks are being designed around three core application models:

  1. Speed — Enhanced mobility broadband
  2. IoT — Mass device deployments
  3. Ultra-low latency applications

How does the 5G RAN meet these challenges?

5G networks are being designed to be almost 10 times faster than 4G technology, so they support a far wider range of applications.

Source: Wiredscore

5G supports 10 times as many connections per square kilometer, which is important because there will be billions of IoT devices to connect. Support for more connections translates to less equipment in the network, smoother deployments and faster deployment times.

In addition, the 5G RAN will extend to both indoor and outdoor radio sites. We will need coverage in buildings and factories, so there will be a mix of indoor and outdoor network equipment. In-building wireless, including small cells and distributed antenna systems, will drive RAN signals into buildings, while outdoor applications will be supported with everything from macro cell towers to small cells.

Low latency
Support of services that require low or ultra-low latency can be achieved by optimizing the location/distribution of the baseband processing elements in the radio access network. This is supported within 5G standards by moving time-sensitive elements of baseband processing closer to the network edge.

In 5G, the baseband elements are broken down into a centralized unit (CU) for the non-real-time functions and a distributed unit (DU) for the real-time functions. To achieve minimum latency in the network, the DU and/or the CU are moved close to the network edge, typically to the radio access node or to a hub location.
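The effect of DU placement can be approximated from fiber propagation delay alone; the 200 km/ms figure below is the usual rule of thumb for light in glass, and real latency budgets add processing and queuing time on top:

```python
FIBER_KM_PER_MS = 200.0  # light travels ~200,000 km/s in fiber, i.e. ~200 km/ms

def one_way_fiber_delay_ms(distance_km: float) -> float:
    """Propagation-only delay between a cell site and its DU/CU host."""
    return distance_km / FIBER_KM_PER_MS

# A DU hosted 100 km from the cell site adds 0.5 ms each way, 1 ms round trip,
# before any processing, which is why ultra-low-latency services push the DU
# toward the network edge.
print(one_way_fiber_delay_ms(100))  # 0.5
```

For a 1 ms end-to-end target, this back-of-the-envelope math caps the fiber run at well under 100 km, before processing delays shrink it further.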

Another major change in 5G architecture is the amount of virtualization in the RAN. A lot of the traditional centralized components will become virtualized and run on server platforms, which is ideal for IoT because it provides a common data center architecture that houses both data center resources and a piece of the RAN.

Look back on how computer virtualization changed IT architecture a decade ago by sharing underutilized compute resources among multiple workloads. We are seeing the same shift in the RAN, where a baseband unit’s capabilities are shared between multiple cell sites.

Virtualized RAN components will be less expensive because they run as software on standard servers rather than the proprietary, hardware-specific devices used in the past. There will be variations depending on what the IoT network is trying to do; for example, low-latency applications such as remote surgery and autonomous driving will require that some of the RAN components be located closer to the end devices.

So, in terms of raw capabilities in the standard, a denser network and virtualization, the 5G RAN will support IoT applications with higher speeds, lower latency and greater reach. 5G will be the first cellular standard that satisfies IoT’s huge demands for connectivity.

